Test Report: Docker_Linux_crio_arm64 21932

84a896b9ca11c6987b6528b1f6e82b411b2540e2:2025-11-24:42492

Failed tests (36/328)

Order  Failed test  Duration (s)
29 TestAddons/serial/Volcano 0.58
35 TestAddons/parallel/Registry 15.63
36 TestAddons/parallel/RegistryCreds 0.55
37 TestAddons/parallel/Ingress 145.52
38 TestAddons/parallel/InspektorGadget 6.27
39 TestAddons/parallel/MetricsServer 5.37
41 TestAddons/parallel/CSI 39.47
42 TestAddons/parallel/Headlamp 3.18
43 TestAddons/parallel/CloudSpanner 5.3
44 TestAddons/parallel/LocalPath 9.44
45 TestAddons/parallel/NvidiaDevicePlugin 6.31
46 TestAddons/parallel/Yakd 6.27
97 TestFunctional/parallel/ServiceCmdConnect 603.39
127 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.92
128 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.89
129 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.14
130 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.31
132 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.21
133 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.37
146 TestFunctional/parallel/ServiceCmd/DeployApp 600.83
152 TestFunctional/parallel/ServiceCmd/HTTPS 0.43
153 TestFunctional/parallel/ServiceCmd/Format 0.5
154 TestFunctional/parallel/ServiceCmd/URL 0.52
191 TestJSONOutput/pause/Command 1.72
197 TestJSONOutput/unpause/Command 1.93
282 TestPause/serial/Pause 7.56
297 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 2.5
304 TestStartStop/group/old-k8s-version/serial/Pause 7.64
310 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 2.55
315 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 2.79
322 TestStartStop/group/no-preload/serial/Pause 7.98
328 TestStartStop/group/embed-certs/serial/Pause 7.58
332 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 2.59
337 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 3.25
342 TestStartStop/group/newest-cni/serial/Pause 5.89
349 TestStartStop/group/default-k8s-diff-port/serial/Pause 7.14
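
The addon failures with logs reproduced below (Volcano, Registry, RegistryCreds) all exit the same way: `addons disable` returns exit status 11 with `MK_ADDON_DISABLE_PAUSED`, because the paused-state check shells out to `sudo runc list -f json`, which fails on this node with `open /run/runc: no such file or directory`. A minimal reproduction against this profile (a sketch, assuming the cluster from this run is still up) looks like:

    $ out/minikube-linux-arm64 -p addons-647907 addons disable registry --alsologtostderr -v=1
    $ out/minikube-linux-arm64 -p addons-647907 ssh "sudo runc list -f json"
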
TestAddons/serial/Volcano (0.58s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-647907 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-647907 addons disable volcano --alsologtostderr -v=1: exit status 11 (579.864727ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1124 13:17:04.827274   11280 out.go:360] Setting OutFile to fd 1 ...
	I1124 13:17:04.827710   11280 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:17:04.827745   11280 out.go:374] Setting ErrFile to fd 2...
	I1124 13:17:04.827771   11280 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:17:04.828311   11280 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-2805/.minikube/bin
	I1124 13:17:04.828755   11280 mustload.go:66] Loading cluster: addons-647907
	I1124 13:17:04.831379   11280 config.go:182] Loaded profile config "addons-647907": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 13:17:04.831471   11280 addons.go:622] checking whether the cluster is paused
	I1124 13:17:04.831680   11280 config.go:182] Loaded profile config "addons-647907": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 13:17:04.831721   11280 host.go:66] Checking if "addons-647907" exists ...
	I1124 13:17:04.832365   11280 cli_runner.go:164] Run: docker container inspect addons-647907 --format={{.State.Status}}
	I1124 13:17:04.874457   11280 ssh_runner.go:195] Run: systemctl --version
	I1124 13:17:04.874523   11280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-647907
	I1124 13:17:04.894909   11280 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/addons-647907/id_rsa Username:docker}
	I1124 13:17:05.018032   11280 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 13:17:05.018148   11280 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 13:17:05.066661   11280 cri.go:89] found id: "f5728fafdcfd6c412d1ef4060b50df18e35cf7f5dc2269c8aa70dd91319f0405"
	I1124 13:17:05.066747   11280 cri.go:89] found id: "8ae9f6e9c70db26bf1fb36c7b9a0254fe500c317f2a6945d2f07ea86680fb3c8"
	I1124 13:17:05.066768   11280 cri.go:89] found id: "5b3fa06bb0192f03af3aabc7e7c455544064d84030d3f3d8fa1eb708a7e3beb5"
	I1124 13:17:05.066791   11280 cri.go:89] found id: "973aff8a30a4f5e51842fa26a1fbb6214e33d96b452253dcff6908718c2afe7c"
	I1124 13:17:05.066824   11280 cri.go:89] found id: "6b55d5b71eba63c5fc2e54b61ab020786ab11eaa1c8a4cf025c69555ae454c9d"
	I1124 13:17:05.066847   11280 cri.go:89] found id: "7e0552d507b6a5d2225b7ea9bed55bfaec54aa5b07b536fb5623d66660e99d5c"
	I1124 13:17:05.066865   11280 cri.go:89] found id: "1868812a690656d2ef78f1d40726b50c55d3757008e08cd19a56abefc60b8f0b"
	I1124 13:17:05.066888   11280 cri.go:89] found id: "e66116781aa16aa7e3505b699baab2f91eb0fe84f2d48e96da8ffaf7c370d972"
	I1124 13:17:05.066928   11280 cri.go:89] found id: "2d0baf6e276932cf15682e3bdf601a7b4d5900d45814832a33695f1bc733e4f2"
	I1124 13:17:05.066950   11280 cri.go:89] found id: "0ba479f65d38f41fd3e6c32354ac0c909a63bf28c2527a3b30d15cae5d824845"
	I1124 13:17:05.066971   11280 cri.go:89] found id: "e67c8d2fe588f3f54af6367c17ccdcd8e19cb3743d88b7ec8c67fe42ca1a460f"
	I1124 13:17:05.067007   11280 cri.go:89] found id: "eb8a3b03da33f63ba2961d22861575e453cf82588de943e9e2fb7ddc1c122f8e"
	I1124 13:17:05.067024   11280 cri.go:89] found id: "7ee4d3e3512b4459fe8cdc4d2749e1dd56d9f5a3d0dc86f8cdd7fbb19e41a97f"
	I1124 13:17:05.067046   11280 cri.go:89] found id: "06594980d0770dcb6f5dcacad55f96761b6d2067a3e3ce0eb929042eff4265f7"
	I1124 13:17:05.067083   11280 cri.go:89] found id: "586f58fd71be7749f41db1a712c05bfa756f658f9c19cfb9aa671c9a8b754c34"
	I1124 13:17:05.067110   11280 cri.go:89] found id: "8a76716af61b8877cf1b7e3b54f8de99e52ffe5a1a222437f661359793b8bc5a"
	I1124 13:17:05.067141   11280 cri.go:89] found id: "612efd74b90cefa74785056cca24a59600ffe50c4cd3d5dc64320505db3e6d46"
	I1124 13:17:05.067174   11280 cri.go:89] found id: "c4588bdbb8946f09fafad25ba2e09f39ff759f2b8cad2f6dadade51f4c71ae52"
	I1124 13:17:05.067196   11280 cri.go:89] found id: "646560c4253bbd4f4ca0e15d6bfd9bc9d55fa7f25775099b26b7343265d565aa"
	I1124 13:17:05.067218   11280 cri.go:89] found id: "47bfa25635dec103e10675178c02de7fcaf080dccd97feecc568c666df4bea66"
	I1124 13:17:05.067256   11280 cri.go:89] found id: "9bbf65dfab06c0f9b0a6968500261eebc250dd8b47864e172e662f184a66f382"
	I1124 13:17:05.067279   11280 cri.go:89] found id: "9eb49b73252f434f02f8d18b6013cd3bdd4b40c3b4046a35100d5350b9164167"
	I1124 13:17:05.067298   11280 cri.go:89] found id: "448d21b7cb2225b5c6cb8829a9d9cc5985f1bee205abf34cb9730bc9b186bc0d"
	I1124 13:17:05.067320   11280 cri.go:89] found id: ""
	I1124 13:17:05.067476   11280 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 13:17:05.084475   11280 out.go:203] 
	W1124 13:17:05.085886   11280 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T13:17:05Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T13:17:05Z" level=error msg="open /run/runc: no such file or directory"
	
	W1124 13:17:05.085917   11280 out.go:285] * 
	* 
	W1124 13:17:05.322012   11280 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1124 13:17:05.323848   11280 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable volcano addon: args "out/minikube-linux-arm64 -p addons-647907 addons disable volcano --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/serial/Volcano (0.58s)
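
The `open /run/runc: no such file or directory` message is runc's own: `runc list` reads its state directory (`/run/runc` by default for root), and that directory does not exist inside the node, even though the CRI-level `crictl ps` in the same log succeeds. A hypothetical spot-check from the host follows; the commands mirror the log above, and the `/run/crun` path is an assumption (CRI-O may be driving an OCI runtime other than runc, whose state would live elsewhere):

    $ out/minikube-linux-arm64 -p addons-647907 ssh "sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"  # CRI sees the containers
    $ out/minikube-linux-arm64 -p addons-647907 ssh "sudo runc list -f json"                                                    # fails as logged above
    $ out/minikube-linux-arm64 -p addons-647907 ssh "ls -d /run/runc /run/crun"                                                 # which state dir actually exists?
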

TestAddons/parallel/Registry (15.63s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 8.511226ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-2l7kt" [9c4c505b-2065-4059-a0a8-66ae39acca38] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.002986161s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-9hgsb" [fe9ffd77-b342-49e0-b2bb-e1634ce62247] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003182669s
addons_test.go:392: (dbg) Run:  kubectl --context addons-647907 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-647907 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-647907 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.008962717s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-arm64 -p addons-647907 ip
2025/11/24 13:17:29 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-647907 addons disable registry --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-647907 addons disable registry --alsologtostderr -v=1: exit status 11 (317.613033ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1124 13:17:30.082807   12199 out.go:360] Setting OutFile to fd 1 ...
	I1124 13:17:30.083055   12199 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:17:30.083070   12199 out.go:374] Setting ErrFile to fd 2...
	I1124 13:17:30.083077   12199 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:17:30.083537   12199 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-2805/.minikube/bin
	I1124 13:17:30.084132   12199 mustload.go:66] Loading cluster: addons-647907
	I1124 13:17:30.084933   12199 config.go:182] Loaded profile config "addons-647907": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 13:17:30.084959   12199 addons.go:622] checking whether the cluster is paused
	I1124 13:17:30.085138   12199 config.go:182] Loaded profile config "addons-647907": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 13:17:30.085161   12199 host.go:66] Checking if "addons-647907" exists ...
	I1124 13:17:30.085907   12199 cli_runner.go:164] Run: docker container inspect addons-647907 --format={{.State.Status}}
	I1124 13:17:30.104086   12199 ssh_runner.go:195] Run: systemctl --version
	I1124 13:17:30.104176   12199 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-647907
	I1124 13:17:30.122467   12199 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/addons-647907/id_rsa Username:docker}
	I1124 13:17:30.236523   12199 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 13:17:30.236600   12199 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 13:17:30.273953   12199 cri.go:89] found id: "f5728fafdcfd6c412d1ef4060b50df18e35cf7f5dc2269c8aa70dd91319f0405"
	I1124 13:17:30.273972   12199 cri.go:89] found id: "8ae9f6e9c70db26bf1fb36c7b9a0254fe500c317f2a6945d2f07ea86680fb3c8"
	I1124 13:17:30.273977   12199 cri.go:89] found id: "5b3fa06bb0192f03af3aabc7e7c455544064d84030d3f3d8fa1eb708a7e3beb5"
	I1124 13:17:30.273980   12199 cri.go:89] found id: "973aff8a30a4f5e51842fa26a1fbb6214e33d96b452253dcff6908718c2afe7c"
	I1124 13:17:30.273988   12199 cri.go:89] found id: "6b55d5b71eba63c5fc2e54b61ab020786ab11eaa1c8a4cf025c69555ae454c9d"
	I1124 13:17:30.273992   12199 cri.go:89] found id: "7e0552d507b6a5d2225b7ea9bed55bfaec54aa5b07b536fb5623d66660e99d5c"
	I1124 13:17:30.273995   12199 cri.go:89] found id: "1868812a690656d2ef78f1d40726b50c55d3757008e08cd19a56abefc60b8f0b"
	I1124 13:17:30.273998   12199 cri.go:89] found id: "e66116781aa16aa7e3505b699baab2f91eb0fe84f2d48e96da8ffaf7c370d972"
	I1124 13:17:30.274001   12199 cri.go:89] found id: "2d0baf6e276932cf15682e3bdf601a7b4d5900d45814832a33695f1bc733e4f2"
	I1124 13:17:30.274007   12199 cri.go:89] found id: "0ba479f65d38f41fd3e6c32354ac0c909a63bf28c2527a3b30d15cae5d824845"
	I1124 13:17:30.274010   12199 cri.go:89] found id: "e67c8d2fe588f3f54af6367c17ccdcd8e19cb3743d88b7ec8c67fe42ca1a460f"
	I1124 13:17:30.274013   12199 cri.go:89] found id: "eb8a3b03da33f63ba2961d22861575e453cf82588de943e9e2fb7ddc1c122f8e"
	I1124 13:17:30.274016   12199 cri.go:89] found id: "7ee4d3e3512b4459fe8cdc4d2749e1dd56d9f5a3d0dc86f8cdd7fbb19e41a97f"
	I1124 13:17:30.274019   12199 cri.go:89] found id: "06594980d0770dcb6f5dcacad55f96761b6d2067a3e3ce0eb929042eff4265f7"
	I1124 13:17:30.274022   12199 cri.go:89] found id: "586f58fd71be7749f41db1a712c05bfa756f658f9c19cfb9aa671c9a8b754c34"
	I1124 13:17:30.274026   12199 cri.go:89] found id: "8a76716af61b8877cf1b7e3b54f8de99e52ffe5a1a222437f661359793b8bc5a"
	I1124 13:17:30.274030   12199 cri.go:89] found id: "612efd74b90cefa74785056cca24a59600ffe50c4cd3d5dc64320505db3e6d46"
	I1124 13:17:30.274034   12199 cri.go:89] found id: "c4588bdbb8946f09fafad25ba2e09f39ff759f2b8cad2f6dadade51f4c71ae52"
	I1124 13:17:30.274037   12199 cri.go:89] found id: "646560c4253bbd4f4ca0e15d6bfd9bc9d55fa7f25775099b26b7343265d565aa"
	I1124 13:17:30.274040   12199 cri.go:89] found id: "47bfa25635dec103e10675178c02de7fcaf080dccd97feecc568c666df4bea66"
	I1124 13:17:30.274045   12199 cri.go:89] found id: "9bbf65dfab06c0f9b0a6968500261eebc250dd8b47864e172e662f184a66f382"
	I1124 13:17:30.274048   12199 cri.go:89] found id: "9eb49b73252f434f02f8d18b6013cd3bdd4b40c3b4046a35100d5350b9164167"
	I1124 13:17:30.274050   12199 cri.go:89] found id: "448d21b7cb2225b5c6cb8829a9d9cc5985f1bee205abf34cb9730bc9b186bc0d"
	I1124 13:17:30.274053   12199 cri.go:89] found id: ""
	I1124 13:17:30.274100   12199 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 13:17:30.289540   12199 out.go:203] 
	W1124 13:17:30.292721   12199 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T13:17:30Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T13:17:30Z" level=error msg="open /run/runc: no such file or directory"
	
	W1124 13:17:30.292745   12199 out.go:285] * 
	* 
	W1124 13:17:30.297148   12199 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1124 13:17:30.300339   12199 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry addon: args "out/minikube-linux-arm64 -p addons-647907 addons disable registry --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Registry (15.63s)

TestAddons/parallel/RegistryCreds (0.55s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 5.723264ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-arm64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-647907
addons_test.go:332: (dbg) Run:  kubectl --context addons-647907 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-647907 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-647907 addons disable registry-creds --alsologtostderr -v=1: exit status 11 (280.863048ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1124 13:18:25.817493   13829 out.go:360] Setting OutFile to fd 1 ...
	I1124 13:18:25.817748   13829 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:18:25.817776   13829 out.go:374] Setting ErrFile to fd 2...
	I1124 13:18:25.817795   13829 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:18:25.818077   13829 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-2805/.minikube/bin
	I1124 13:18:25.818396   13829 mustload.go:66] Loading cluster: addons-647907
	I1124 13:18:25.818814   13829 config.go:182] Loaded profile config "addons-647907": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 13:18:25.818852   13829 addons.go:622] checking whether the cluster is paused
	I1124 13:18:25.818996   13829 config.go:182] Loaded profile config "addons-647907": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 13:18:25.819025   13829 host.go:66] Checking if "addons-647907" exists ...
	I1124 13:18:25.819598   13829 cli_runner.go:164] Run: docker container inspect addons-647907 --format={{.State.Status}}
	I1124 13:18:25.838693   13829 ssh_runner.go:195] Run: systemctl --version
	I1124 13:18:25.838756   13829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-647907
	I1124 13:18:25.856904   13829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/addons-647907/id_rsa Username:docker}
	I1124 13:18:25.969779   13829 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 13:18:25.969882   13829 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 13:18:26.002295   13829 cri.go:89] found id: "f5728fafdcfd6c412d1ef4060b50df18e35cf7f5dc2269c8aa70dd91319f0405"
	I1124 13:18:26.002384   13829 cri.go:89] found id: "8ae9f6e9c70db26bf1fb36c7b9a0254fe500c317f2a6945d2f07ea86680fb3c8"
	I1124 13:18:26.002410   13829 cri.go:89] found id: "5b3fa06bb0192f03af3aabc7e7c455544064d84030d3f3d8fa1eb708a7e3beb5"
	I1124 13:18:26.002437   13829 cri.go:89] found id: "973aff8a30a4f5e51842fa26a1fbb6214e33d96b452253dcff6908718c2afe7c"
	I1124 13:18:26.002469   13829 cri.go:89] found id: "6b55d5b71eba63c5fc2e54b61ab020786ab11eaa1c8a4cf025c69555ae454c9d"
	I1124 13:18:26.002494   13829 cri.go:89] found id: "7e0552d507b6a5d2225b7ea9bed55bfaec54aa5b07b536fb5623d66660e99d5c"
	I1124 13:18:26.002518   13829 cri.go:89] found id: "1868812a690656d2ef78f1d40726b50c55d3757008e08cd19a56abefc60b8f0b"
	I1124 13:18:26.002543   13829 cri.go:89] found id: "e66116781aa16aa7e3505b699baab2f91eb0fe84f2d48e96da8ffaf7c370d972"
	I1124 13:18:26.002571   13829 cri.go:89] found id: "2d0baf6e276932cf15682e3bdf601a7b4d5900d45814832a33695f1bc733e4f2"
	I1124 13:18:26.002601   13829 cri.go:89] found id: "0ba479f65d38f41fd3e6c32354ac0c909a63bf28c2527a3b30d15cae5d824845"
	I1124 13:18:26.002621   13829 cri.go:89] found id: "e67c8d2fe588f3f54af6367c17ccdcd8e19cb3743d88b7ec8c67fe42ca1a460f"
	I1124 13:18:26.002647   13829 cri.go:89] found id: "eb8a3b03da33f63ba2961d22861575e453cf82588de943e9e2fb7ddc1c122f8e"
	I1124 13:18:26.002678   13829 cri.go:89] found id: "7ee4d3e3512b4459fe8cdc4d2749e1dd56d9f5a3d0dc86f8cdd7fbb19e41a97f"
	I1124 13:18:26.002704   13829 cri.go:89] found id: "06594980d0770dcb6f5dcacad55f96761b6d2067a3e3ce0eb929042eff4265f7"
	I1124 13:18:26.002727   13829 cri.go:89] found id: "586f58fd71be7749f41db1a712c05bfa756f658f9c19cfb9aa671c9a8b754c34"
	I1124 13:18:26.002755   13829 cri.go:89] found id: "8a76716af61b8877cf1b7e3b54f8de99e52ffe5a1a222437f661359793b8bc5a"
	I1124 13:18:26.002800   13829 cri.go:89] found id: "612efd74b90cefa74785056cca24a59600ffe50c4cd3d5dc64320505db3e6d46"
	I1124 13:18:26.002829   13829 cri.go:89] found id: "c4588bdbb8946f09fafad25ba2e09f39ff759f2b8cad2f6dadade51f4c71ae52"
	I1124 13:18:26.002854   13829 cri.go:89] found id: "646560c4253bbd4f4ca0e15d6bfd9bc9d55fa7f25775099b26b7343265d565aa"
	I1124 13:18:26.002882   13829 cri.go:89] found id: "47bfa25635dec103e10675178c02de7fcaf080dccd97feecc568c666df4bea66"
	I1124 13:18:26.002926   13829 cri.go:89] found id: "9bbf65dfab06c0f9b0a6968500261eebc250dd8b47864e172e662f184a66f382"
	I1124 13:18:26.002952   13829 cri.go:89] found id: "9eb49b73252f434f02f8d18b6013cd3bdd4b40c3b4046a35100d5350b9164167"
	I1124 13:18:26.002972   13829 cri.go:89] found id: "448d21b7cb2225b5c6cb8829a9d9cc5985f1bee205abf34cb9730bc9b186bc0d"
	I1124 13:18:26.002997   13829 cri.go:89] found id: ""
	I1124 13:18:26.003119   13829 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 13:18:26.020789   13829 out.go:203] 
	W1124 13:18:26.023866   13829 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T13:18:26Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T13:18:26Z" level=error msg="open /run/runc: no such file or directory"
	
	W1124 13:18:26.023896   13829 out.go:285] * 
	* 
	W1124 13:18:26.028254   13829 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1124 13:18:26.031170   13829 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry-creds addon: args "out/minikube-linux-arm64 -p addons-647907 addons disable registry-creds --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/RegistryCreds (0.55s)

TestAddons/parallel/Ingress (145.52s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-647907 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-647907 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-647907 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [0ea88fb7-36e9-4550-86db-ed6efa178b59] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [0ea88fb7-36e9-4550-86db-ed6efa178b59] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.003140098s
I1124 13:17:51.709905    4611 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-647907 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-647907 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.332788067s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
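`minikube ssh` surfaces the remote command's exit status, and 28 is curl's exit code for an operation timeout (CURLE_OPERATION_TIMDOUT is code 28), so the requests to the ingress controller were timing out rather than being refused. A faster manual probe with a verbose trace and an explicit deadline (a sketch, not part of the test suite) would be:

    $ out/minikube-linux-arm64 -p addons-647907 ssh "curl -sv --max-time 10 http://127.0.0.1/ -H 'Host: nginx.example.com'"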
addons_test.go:288: (dbg) Run:  kubectl --context addons-647907 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-647907 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-647907
helpers_test.go:243: (dbg) docker inspect addons-647907:

-- stdout --
	[
	    {
	        "Id": "72292e2fa4c837496fd187b0d5c5858af15fb8e4b6965f101b72fc196db21cc3",
	        "Created": "2025-11-24T13:14:44.795405416Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 5764,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T13:14:44.8616038Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/72292e2fa4c837496fd187b0d5c5858af15fb8e4b6965f101b72fc196db21cc3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/72292e2fa4c837496fd187b0d5c5858af15fb8e4b6965f101b72fc196db21cc3/hostname",
	        "HostsPath": "/var/lib/docker/containers/72292e2fa4c837496fd187b0d5c5858af15fb8e4b6965f101b72fc196db21cc3/hosts",
	        "LogPath": "/var/lib/docker/containers/72292e2fa4c837496fd187b0d5c5858af15fb8e4b6965f101b72fc196db21cc3/72292e2fa4c837496fd187b0d5c5858af15fb8e4b6965f101b72fc196db21cc3-json.log",
	        "Name": "/addons-647907",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-647907:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-647907",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "72292e2fa4c837496fd187b0d5c5858af15fb8e4b6965f101b72fc196db21cc3",
	                "LowerDir": "/var/lib/docker/overlay2/83dc36c1c0d9c3009a399933d1eff6bd8f53c389406be1e2b1643f996c3d4cf7-init/diff:/var/lib/docker/overlay2/13a44a1c9c7389f495d930a01834ff28273a0e5eb2fe3411fc4db3ff0709690d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/83dc36c1c0d9c3009a399933d1eff6bd8f53c389406be1e2b1643f996c3d4cf7/merged",
	                "UpperDir": "/var/lib/docker/overlay2/83dc36c1c0d9c3009a399933d1eff6bd8f53c389406be1e2b1643f996c3d4cf7/diff",
	                "WorkDir": "/var/lib/docker/overlay2/83dc36c1c0d9c3009a399933d1eff6bd8f53c389406be1e2b1643f996c3d4cf7/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-647907",
	                "Source": "/var/lib/docker/volumes/addons-647907/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-647907",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-647907",
	                "name.minikube.sigs.k8s.io": "addons-647907",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9c97e8ca3578b37bdb189bf9fb2db7d72212c54939b4e96b709ed6d6d896a380",
	            "SandboxKey": "/var/run/docker/netns/9c97e8ca3578",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-647907": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "96:03:79:93:d7:24",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "bdfb6399110fcb1822ff82a0a9ddcac2babaa574ec38cda228f6cd8fcca07e1e",
	                    "EndpointID": "b14643e12552a796ea7b17975b1a69233b18762ba658c083cf2b24865e9b3bb9",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-647907",
	                        "72292e2fa4c8"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
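The `Ports` map in the inspect output above is also what the `cli_runner` template seen throughout the stderr logs resolves: the SSH endpoint `127.0.0.1:32768` that `sshutil` connects to is simply the published host port for `22/tcp`. For example:

    $ docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-647907
    32768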
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-647907 -n addons-647907
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p addons-647907 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p addons-647907 logs -n 25: (1.577500466s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-docker-367583                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-367583 │ jenkins │ v1.37.0 │ 24 Nov 25 13:14 UTC │ 24 Nov 25 13:14 UTC │
	│ start   │ --download-only -p binary-mirror-147804 --alsologtostderr --binary-mirror http://127.0.0.1:44105 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-147804   │ jenkins │ v1.37.0 │ 24 Nov 25 13:14 UTC │                     │
	│ delete  │ -p binary-mirror-147804                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-147804   │ jenkins │ v1.37.0 │ 24 Nov 25 13:14 UTC │ 24 Nov 25 13:14 UTC │
	│ addons  │ enable dashboard -p addons-647907                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-647907          │ jenkins │ v1.37.0 │ 24 Nov 25 13:14 UTC │                     │
	│ addons  │ disable dashboard -p addons-647907                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-647907          │ jenkins │ v1.37.0 │ 24 Nov 25 13:14 UTC │                     │
	│ start   │ -p addons-647907 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-647907          │ jenkins │ v1.37.0 │ 24 Nov 25 13:14 UTC │ 24 Nov 25 13:17 UTC │
	│ addons  │ addons-647907 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-647907          │ jenkins │ v1.37.0 │ 24 Nov 25 13:17 UTC │                     │
	│ addons  │ addons-647907 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-647907          │ jenkins │ v1.37.0 │ 24 Nov 25 13:17 UTC │                     │
	│ addons  │ enable headlamp -p addons-647907 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-647907          │ jenkins │ v1.37.0 │ 24 Nov 25 13:17 UTC │                     │
	│ addons  │ addons-647907 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-647907          │ jenkins │ v1.37.0 │ 24 Nov 25 13:17 UTC │                     │
	│ addons  │ addons-647907 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-647907          │ jenkins │ v1.37.0 │ 24 Nov 25 13:17 UTC │                     │
	│ ip      │ addons-647907 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-647907          │ jenkins │ v1.37.0 │ 24 Nov 25 13:17 UTC │ 24 Nov 25 13:17 UTC │
	│ addons  │ addons-647907 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-647907          │ jenkins │ v1.37.0 │ 24 Nov 25 13:17 UTC │                     │
	│ addons  │ addons-647907 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-647907          │ jenkins │ v1.37.0 │ 24 Nov 25 13:17 UTC │                     │
	│ addons  │ addons-647907 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-647907          │ jenkins │ v1.37.0 │ 24 Nov 25 13:17 UTC │                     │
	│ ssh     │ addons-647907 ssh cat /opt/local-path-provisioner/pvc-087c4ef5-30ca-4efc-9e47-792885953111_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-647907          │ jenkins │ v1.37.0 │ 24 Nov 25 13:17 UTC │ 24 Nov 25 13:17 UTC │
	│ addons  │ addons-647907 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-647907          │ jenkins │ v1.37.0 │ 24 Nov 25 13:17 UTC │                     │
	│ addons  │ addons-647907 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-647907          │ jenkins │ v1.37.0 │ 24 Nov 25 13:17 UTC │                     │
	│ ssh     │ addons-647907 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-647907          │ jenkins │ v1.37.0 │ 24 Nov 25 13:17 UTC │                     │
	│ addons  │ addons-647907 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-647907          │ jenkins │ v1.37.0 │ 24 Nov 25 13:18 UTC │                     │
	│ addons  │ addons-647907 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-647907          │ jenkins │ v1.37.0 │ 24 Nov 25 13:18 UTC │                     │
	│ addons  │ addons-647907 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-647907          │ jenkins │ v1.37.0 │ 24 Nov 25 13:18 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-647907                                                                                                                                                                                                                                                                                                                                                                                           │ addons-647907          │ jenkins │ v1.37.0 │ 24 Nov 25 13:18 UTC │ 24 Nov 25 13:18 UTC │
	│ addons  │ addons-647907 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-647907          │ jenkins │ v1.37.0 │ 24 Nov 25 13:18 UTC │                     │
	│ ip      │ addons-647907 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-647907          │ jenkins │ v1.37.0 │ 24 Nov 25 13:20 UTC │ 24 Nov 25 13:20 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 13:14:18
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 13:14:18.959254    5361 out.go:360] Setting OutFile to fd 1 ...
	I1124 13:14:18.959722    5361 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:14:18.959778    5361 out.go:374] Setting ErrFile to fd 2...
	I1124 13:14:18.959798    5361 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:14:18.960070    5361 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-2805/.minikube/bin
	I1124 13:14:18.960550    5361 out.go:368] Setting JSON to false
	I1124 13:14:18.961281    5361 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":3410,"bootTime":1763986649,"procs":143,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1124 13:14:18.961368    5361 start.go:143] virtualization:  
	I1124 13:14:18.964751    5361 out.go:179] * [addons-647907] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1124 13:14:18.968490    5361 out.go:179]   - MINIKUBE_LOCATION=21932
	I1124 13:14:18.968559    5361 notify.go:221] Checking for updates...
	I1124 13:14:18.974110    5361 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 13:14:18.976954    5361 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21932-2805/kubeconfig
	I1124 13:14:18.979743    5361 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21932-2805/.minikube
	I1124 13:14:18.982412    5361 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1124 13:14:18.985229    5361 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 13:14:18.988213    5361 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 13:14:19.017235    5361 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1124 13:14:19.017350    5361 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 13:14:19.076565    5361 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:47 SystemTime:2025-11-24 13:14:19.067653266 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
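For reference, the "docker system info --format "{{json .}}"" probe above is how minikube detects the daemon's cgroup driver and resource limits before picking defaults. A minimal sketch of the same check, not part of the captured run, assuming jq is installed on the host:

	# Reproduce minikube's daemon probe by hand and pull out the fields it cares about
	docker system info --format "{{json .}}" | jq '{CgroupDriver, NCPU, MemTotal, ServerVersion}'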
	I1124 13:14:19.076669    5361 docker.go:319] overlay module found
	I1124 13:14:19.079841    5361 out.go:179] * Using the docker driver based on user configuration
	I1124 13:14:19.082761    5361 start.go:309] selected driver: docker
	I1124 13:14:19.082786    5361 start.go:927] validating driver "docker" against <nil>
	I1124 13:14:19.082813    5361 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 13:14:19.083663    5361 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 13:14:19.135898    5361 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:47 SystemTime:2025-11-24 13:14:19.126582675 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 13:14:19.136061    5361 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1124 13:14:19.136280    5361 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 13:14:19.139372    5361 out.go:179] * Using Docker driver with root privileges
	I1124 13:14:19.142408    5361 cni.go:84] Creating CNI manager for ""
	I1124 13:14:19.142478    5361 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 13:14:19.142490    5361 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1124 13:14:19.142579    5361 start.go:353] cluster config:
	{Name:addons-647907 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-647907 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 13:14:19.145762    5361 out.go:179] * Starting "addons-647907" primary control-plane node in "addons-647907" cluster
	I1124 13:14:19.148630    5361 cache.go:134] Beginning downloading kic base image for docker with crio
	I1124 13:14:19.151553    5361 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1124 13:14:19.154399    5361 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 13:14:19.154447    5361 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21932-2805/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1124 13:14:19.154461    5361 cache.go:65] Caching tarball of preloaded images
	I1124 13:14:19.154468    5361 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1124 13:14:19.154540    5361 preload.go:238] Found /home/jenkins/minikube-integration/21932-2805/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1124 13:14:19.154550    5361 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1124 13:14:19.154890    5361 profile.go:143] Saving config to /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/addons-647907/config.json ...
	I1124 13:14:19.154920    5361 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/addons-647907/config.json: {Name:mkcdf77b8bac65501405fa44a8ac6bcb96bb5594 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:14:19.170121    5361 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f to local cache
	I1124 13:14:19.170252    5361 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local cache directory
	I1124 13:14:19.170270    5361 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local cache directory, skipping pull
	I1124 13:14:19.170275    5361 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in cache, skipping pull
	I1124 13:14:19.170281    5361 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f as a tarball
	I1124 13:14:19.170286    5361 cache.go:172] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f from local cache
	I1124 13:14:37.102225    5361 cache.go:174] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f from cached tarball
	I1124 13:14:37.102278    5361 cache.go:240] Successfully downloaded all kic artifacts
	I1124 13:14:37.102313    5361 start.go:360] acquireMachinesLock for addons-647907: {Name:mk166fce5dc7857652385b2817a2702b00f03887 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 13:14:37.102437    5361 start.go:364] duration metric: took 99.955µs to acquireMachinesLock for "addons-647907"
	I1124 13:14:37.102471    5361 start.go:93] Provisioning new machine with config: &{Name:addons-647907 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-647907 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 13:14:37.102541    5361 start.go:125] createHost starting for "" (driver="docker")
	I1124 13:14:37.104284    5361 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1124 13:14:37.104518    5361 start.go:159] libmachine.API.Create for "addons-647907" (driver="docker")
	I1124 13:14:37.104553    5361 client.go:173] LocalClient.Create starting
	I1124 13:14:37.104663    5361 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca.pem
	I1124 13:14:37.483211    5361 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21932-2805/.minikube/certs/cert.pem
	I1124 13:14:37.693071    5361 cli_runner.go:164] Run: docker network inspect addons-647907 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1124 13:14:37.709134    5361 cli_runner.go:211] docker network inspect addons-647907 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1124 13:14:37.709223    5361 network_create.go:284] running [docker network inspect addons-647907] to gather additional debugging logs...
	I1124 13:14:37.709245    5361 cli_runner.go:164] Run: docker network inspect addons-647907
	W1124 13:14:37.725409    5361 cli_runner.go:211] docker network inspect addons-647907 returned with exit code 1
	I1124 13:14:37.725439    5361 network_create.go:287] error running [docker network inspect addons-647907]: docker network inspect addons-647907: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-647907 not found
	I1124 13:14:37.725457    5361 network_create.go:289] output of [docker network inspect addons-647907]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-647907 not found
	
	** /stderr **
	I1124 13:14:37.725559    5361 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 13:14:37.741545    5361 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400191b650}
	I1124 13:14:37.741587    5361 network_create.go:124] attempt to create docker network addons-647907 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1124 13:14:37.741646    5361 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-647907 addons-647907
	I1124 13:14:37.793309    5361 network_create.go:108] docker network addons-647907 192.168.49.0/24 created
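For reference, the network_create step above provisions a dedicated bridge network and then pins the node to the first client address in it. The subnet and gateway the log claims can be confirmed with the docker CLI alone (a sketch, not output from this run):

	docker network inspect addons-647907 --format '{{(index .IPAM.Config 0).Subnet}} {{(index .IPAM.Config 0).Gateway}}'
	# expected: 192.168.49.0/24 192.168.49.1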
	I1124 13:14:37.793343    5361 kic.go:121] calculated static IP "192.168.49.2" for the "addons-647907" container
	I1124 13:14:37.793411    5361 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1124 13:14:37.809413    5361 cli_runner.go:164] Run: docker volume create addons-647907 --label name.minikube.sigs.k8s.io=addons-647907 --label created_by.minikube.sigs.k8s.io=true
	I1124 13:14:37.826744    5361 oci.go:103] Successfully created a docker volume addons-647907
	I1124 13:14:37.826828    5361 cli_runner.go:164] Run: docker run --rm --name addons-647907-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-647907 --entrypoint /usr/bin/test -v addons-647907:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1124 13:14:40.269681    5361 cli_runner.go:217] Completed: docker run --rm --name addons-647907-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-647907 --entrypoint /usr/bin/test -v addons-647907:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib: (2.442793106s)
	I1124 13:14:40.269707    5361 oci.go:107] Successfully prepared a docker volume addons-647907
	I1124 13:14:40.269752    5361 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 13:14:40.269763    5361 kic.go:194] Starting extracting preloaded images to volume ...
	I1124 13:14:40.269880    5361 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21932-2805/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-647907:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir
	I1124 13:14:44.727168    5361 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21932-2805/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-647907:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir: (4.457246814s)
	I1124 13:14:44.727217    5361 kic.go:203] duration metric: took 4.457434639s to extract preloaded images to volume ...
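The preload step above avoids pulling images inside the node: the lz4 tarball is untarred straight into the addons-647907 volume that later backs /var in the node container. A sketch for spot-checking the result after extraction; the alpine image and the CRI-O storage path are assumptions here, not taken from the log:

	# List the image store the preload should have populated (path assumed to be CRI-O's default)
	docker run --rm -v addons-647907:/var alpine ls /var/lib/containers/storage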
	W1124 13:14:44.727348    5361 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1124 13:14:44.727486    5361 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1124 13:14:44.780894    5361 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-647907 --name addons-647907 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-647907 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-647907 --network addons-647907 --ip 192.168.49.2 --volume addons-647907:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1124 13:14:45.269267    5361 cli_runner.go:164] Run: docker container inspect addons-647907 --format={{.State.Running}}
	I1124 13:14:45.297647    5361 cli_runner.go:164] Run: docker container inspect addons-647907 --format={{.State.Status}}
	I1124 13:14:45.329992    5361 cli_runner.go:164] Run: docker exec addons-647907 stat /var/lib/dpkg/alternatives/iptables
	I1124 13:14:45.387840    5361 oci.go:144] the created container "addons-647907" has a running status.
	I1124 13:14:45.387865    5361 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21932-2805/.minikube/machines/addons-647907/id_rsa...
	I1124 13:14:45.475406    5361 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21932-2805/.minikube/machines/addons-647907/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1124 13:14:45.495076    5361 cli_runner.go:164] Run: docker container inspect addons-647907 --format={{.State.Status}}
	I1124 13:14:45.514600    5361 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1124 13:14:45.514618    5361 kic_runner.go:114] Args: [docker exec --privileged addons-647907 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1124 13:14:45.559844    5361 cli_runner.go:164] Run: docker container inspect addons-647907 --format={{.State.Status}}
	I1124 13:14:45.580212    5361 machine.go:94] provisionDockerMachine start ...
	I1124 13:14:45.580293    5361 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-647907
	I1124 13:14:45.601268    5361 main.go:143] libmachine: Using SSH client type: native
	I1124 13:14:45.601612    5361 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1124 13:14:45.601621    5361 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 13:14:45.602293    5361 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1124 13:14:48.754864    5361 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-647907
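The "Error dialing TCP" at 13:14:45 above is only sshd not being up yet; the retry about three seconds later succeeds over the host port Docker published for 22/tcp (32768 here). The same session can be opened by hand, a sketch using the key path and username shown in the log:

	PORT=$(docker port addons-647907 22/tcp | head -n1 | cut -d: -f2)
	ssh -i /home/jenkins/minikube-integration/21932-2805/.minikube/machines/addons-647907/id_rsa \
	  -p "$PORT" docker@127.0.0.1 hostname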
	
	I1124 13:14:48.754888    5361 ubuntu.go:182] provisioning hostname "addons-647907"
	I1124 13:14:48.754951    5361 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-647907
	I1124 13:14:48.773898    5361 main.go:143] libmachine: Using SSH client type: native
	I1124 13:14:48.774218    5361 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1124 13:14:48.774236    5361 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-647907 && echo "addons-647907" | sudo tee /etc/hostname
	I1124 13:14:48.936633    5361 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-647907
	
	I1124 13:14:48.936711    5361 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-647907
	I1124 13:14:48.956691    5361 main.go:143] libmachine: Using SSH client type: native
	I1124 13:14:48.957005    5361 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1124 13:14:48.957027    5361 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-647907' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-647907/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-647907' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 13:14:49.107613    5361 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 13:14:49.107636    5361 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21932-2805/.minikube CaCertPath:/home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21932-2805/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21932-2805/.minikube}
	I1124 13:14:49.107665    5361 ubuntu.go:190] setting up certificates
	I1124 13:14:49.107674    5361 provision.go:84] configureAuth start
	I1124 13:14:49.107734    5361 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-647907
	I1124 13:14:49.130873    5361 provision.go:143] copyHostCerts
	I1124 13:14:49.130956    5361 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21932-2805/.minikube/ca.pem (1078 bytes)
	I1124 13:14:49.131086    5361 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-2805/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21932-2805/.minikube/cert.pem (1123 bytes)
	I1124 13:14:49.131159    5361 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-2805/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21932-2805/.minikube/key.pem (1675 bytes)
	I1124 13:14:49.131220    5361 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21932-2805/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca-key.pem org=jenkins.addons-647907 san=[127.0.0.1 192.168.49.2 addons-647907 localhost minikube]
	I1124 13:14:49.224885    5361 provision.go:177] copyRemoteCerts
	I1124 13:14:49.224948    5361 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 13:14:49.224989    5361 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-647907
	I1124 13:14:49.241343    5361 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/addons-647907/id_rsa Username:docker}
	I1124 13:14:49.346887    5361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1124 13:14:49.363979    5361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1124 13:14:49.381466    5361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1124 13:14:49.398926    5361 provision.go:87] duration metric: took 291.226079ms to configureAuth
	I1124 13:14:49.398957    5361 ubuntu.go:206] setting minikube options for container-runtime
	I1124 13:14:49.399177    5361 config.go:182] Loaded profile config "addons-647907": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 13:14:49.399289    5361 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-647907
	I1124 13:14:49.416778    5361 main.go:143] libmachine: Using SSH client type: native
	I1124 13:14:49.417089    5361 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1124 13:14:49.417109    5361 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1124 13:14:49.723453    5361 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1124 13:14:49.723488    5361 machine.go:97] duration metric: took 4.143255809s to provisionDockerMachine
	I1124 13:14:49.723498    5361 client.go:176] duration metric: took 12.618936375s to LocalClient.Create
	I1124 13:14:49.723514    5361 start.go:167] duration metric: took 12.618997667s to libmachine.API.Create "addons-647907"
	I1124 13:14:49.723522    5361 start.go:293] postStartSetup for "addons-647907" (driver="docker")
	I1124 13:14:49.723532    5361 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 13:14:49.723655    5361 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 13:14:49.723733    5361 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-647907
	I1124 13:14:49.742263    5361 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/addons-647907/id_rsa Username:docker}
	I1124 13:14:49.847478    5361 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 13:14:49.850844    5361 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 13:14:49.850872    5361 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 13:14:49.850884    5361 filesync.go:126] Scanning /home/jenkins/minikube-integration/21932-2805/.minikube/addons for local assets ...
	I1124 13:14:49.850950    5361 filesync.go:126] Scanning /home/jenkins/minikube-integration/21932-2805/.minikube/files for local assets ...
	I1124 13:14:49.850984    5361 start.go:296] duration metric: took 127.455859ms for postStartSetup
	I1124 13:14:49.851302    5361 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-647907
	I1124 13:14:49.868940    5361 profile.go:143] Saving config to /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/addons-647907/config.json ...
	I1124 13:14:49.869230    5361 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 13:14:49.869279    5361 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-647907
	I1124 13:14:49.886217    5361 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/addons-647907/id_rsa Username:docker}
	I1124 13:14:49.988329    5361 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 13:14:49.992862    5361 start.go:128] duration metric: took 12.890306522s to createHost
	I1124 13:14:49.992890    5361 start.go:83] releasing machines lock for "addons-647907", held for 12.890440423s
	I1124 13:14:49.993236    5361 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-647907
	I1124 13:14:50.016970    5361 ssh_runner.go:195] Run: cat /version.json
	I1124 13:14:50.017028    5361 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 13:14:50.017030    5361 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-647907
	I1124 13:14:50.017090    5361 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-647907
	I1124 13:14:50.048603    5361 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/addons-647907/id_rsa Username:docker}
	I1124 13:14:50.049035    5361 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/addons-647907/id_rsa Username:docker}
	I1124 13:14:50.151107    5361 ssh_runner.go:195] Run: systemctl --version
	I1124 13:14:50.241651    5361 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1124 13:14:50.275847    5361 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 13:14:50.280180    5361 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 13:14:50.280279    5361 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 13:14:50.309661    5361 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1124 13:14:50.309685    5361 start.go:496] detecting cgroup driver to use...
	I1124 13:14:50.309718    5361 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1124 13:14:50.309767    5361 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1124 13:14:50.327183    5361 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1124 13:14:50.339968    5361 docker.go:218] disabling cri-docker service (if available) ...
	I1124 13:14:50.340032    5361 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 13:14:50.357604    5361 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 13:14:50.376662    5361 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 13:14:50.505551    5361 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 13:14:50.637075    5361 docker.go:234] disabling docker service ...
	I1124 13:14:50.637163    5361 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 13:14:50.659436    5361 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 13:14:50.672412    5361 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 13:14:50.786818    5361 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 13:14:50.901557    5361 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 13:14:50.915929    5361 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 13:14:50.930164    5361 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1124 13:14:50.930277    5361 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 13:14:50.939036    5361 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1124 13:14:50.939156    5361 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 13:14:50.947861    5361 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 13:14:50.956829    5361 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 13:14:50.965530    5361 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 13:14:50.973365    5361 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 13:14:50.981798    5361 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 13:14:50.995161    5361 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
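The run of sed commands above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: the pause image, cgroup_manager = "cgroupfs", a pod-scoped conmon_cgroup, and net.ipv4.ip_unprivileged_port_start=0 under default_sysctls. A one-liner to confirm the drop-in ended up as intended (the grep pattern is illustrative):

	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf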
	I1124 13:14:51.006604    5361 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 13:14:51.015179    5361 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1124 13:14:51.015250    5361 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1124 13:14:51.029473    5361 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 13:14:51.037855    5361 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 13:14:51.161247    5361 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1124 13:14:51.350601    5361 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1124 13:14:51.350742    5361 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1124 13:14:51.354396    5361 start.go:564] Will wait 60s for crictl version
	I1124 13:14:51.354476    5361 ssh_runner.go:195] Run: which crictl
	I1124 13:14:51.357837    5361 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 13:14:51.384515    5361 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1124 13:14:51.384655    5361 ssh_runner.go:195] Run: crio --version
	I1124 13:14:51.413563    5361 ssh_runner.go:195] Run: crio --version
	I1124 13:14:51.446075    5361 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1124 13:14:51.449013    5361 cli_runner.go:164] Run: docker network inspect addons-647907 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 13:14:51.465725    5361 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1124 13:14:51.469639    5361 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 13:14:51.479611    5361 kubeadm.go:884] updating cluster {Name:addons-647907 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-647907 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 13:14:51.479744    5361 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 13:14:51.479808    5361 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 13:14:51.516483    5361 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 13:14:51.516508    5361 crio.go:433] Images already preloaded, skipping extraction
	I1124 13:14:51.516568    5361 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 13:14:51.541994    5361 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 13:14:51.542018    5361 cache_images.go:86] Images are preloaded, skipping loading
	I1124 13:14:51.542026    5361 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1124 13:14:51.542114    5361 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-647907 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-647907 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
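The [Unit]/[Service] fragment above is the drop-in minikube writes to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (see the scp at 13:14:51 below); the bare ExecStart= line clears the distribution default before the minikube flags are applied, which is standard systemd override behavior. To see the merged unit from inside the node, a sketch (not part of the captured run):

	systemctl cat kubelet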
	I1124 13:14:51.542197    5361 ssh_runner.go:195] Run: crio config
	I1124 13:14:51.597123    5361 cni.go:84] Creating CNI manager for ""
	I1124 13:14:51.597146    5361 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 13:14:51.597160    5361 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 13:14:51.597211    5361 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-647907 NodeName:addons-647907 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 13:14:51.597375    5361 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-647907"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
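The four documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are what the scp at 13:14:51 writes to /var/tmp/minikube/kubeadm.yaml.new before init runs. If kubeadm is on the PATH inside the node, the file can be sanity-checked without starting anything; a sketch, assuming a kubeadm release new enough to ship "config validate" (roughly v1.26+):

	kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new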
	
	I1124 13:14:51.597452    5361 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1124 13:14:51.605049    5361 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 13:14:51.605118    5361 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 13:14:51.612701    5361 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1124 13:14:51.625821    5361 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 13:14:51.639336    5361 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
	I1124 13:14:51.652148    5361 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1124 13:14:51.655705    5361 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 13:14:51.665681    5361 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 13:14:51.787413    5361 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 13:14:51.802639    5361 certs.go:69] Setting up /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/addons-647907 for IP: 192.168.49.2
	I1124 13:14:51.802702    5361 certs.go:195] generating shared ca certs ...
	I1124 13:14:51.802732    5361 certs.go:227] acquiring lock for ca certs: {Name:mk5b88bcf3bee8e73291a2c9c79f99bafa2afa7b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:14:51.802894    5361 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21932-2805/.minikube/ca.key
	I1124 13:14:52.077328    5361 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21932-2805/.minikube/ca.crt ...
	I1124 13:14:52.077364    5361 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2805/.minikube/ca.crt: {Name:mk006816f465c2c5820b705b1ef87c191af5a66e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:14:52.077575    5361 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21932-2805/.minikube/ca.key ...
	I1124 13:14:52.077589    5361 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2805/.minikube/ca.key: {Name:mk8a6badd37b65193516f56d8210e821ef116a99 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:14:52.077672    5361 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21932-2805/.minikube/proxy-client-ca.key
	I1124 13:14:52.181626    5361 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21932-2805/.minikube/proxy-client-ca.crt ...
	I1124 13:14:52.181654    5361 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2805/.minikube/proxy-client-ca.crt: {Name:mk2aadb170d054acc188db3efc8c5b2a6b5be842 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:14:52.181824    5361 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21932-2805/.minikube/proxy-client-ca.key ...
	I1124 13:14:52.181836    5361 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2805/.minikube/proxy-client-ca.key: {Name:mkec5e104c9dfb79f73922b399b39c84b56e6d28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:14:52.181917    5361 certs.go:257] generating profile certs ...
	I1124 13:14:52.181973    5361 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/addons-647907/client.key
	I1124 13:14:52.182000    5361 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/addons-647907/client.crt with IP's: []
	I1124 13:14:53.121408    5361 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/addons-647907/client.crt ...
	I1124 13:14:53.121443    5361 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/addons-647907/client.crt: {Name:mk74580fd15c6a031e8b356a42dbab7d3066e438 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:14:53.121629    5361 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/addons-647907/client.key ...
	I1124 13:14:53.121643    5361 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/addons-647907/client.key: {Name:mkab3610f80ed8c7989d7d5ffeb6775d895097f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:14:53.121768    5361 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/addons-647907/apiserver.key.a92d6616
	I1124 13:14:53.121789    5361 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/addons-647907/apiserver.crt.a92d6616 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1124 13:14:53.449274    5361 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/addons-647907/apiserver.crt.a92d6616 ...
	I1124 13:14:53.449305    5361 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/addons-647907/apiserver.crt.a92d6616: {Name:mkd81e0b0450ff367c0d93f5d17a46e135930fbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:14:53.449481    5361 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/addons-647907/apiserver.key.a92d6616 ...
	I1124 13:14:53.449497    5361 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/addons-647907/apiserver.key.a92d6616: {Name:mka149af667e8c33bbed2aab6934fd007fa6a659 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:14:53.449614    5361 certs.go:382] copying /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/addons-647907/apiserver.crt.a92d6616 -> /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/addons-647907/apiserver.crt
	I1124 13:14:53.449701    5361 certs.go:386] copying /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/addons-647907/apiserver.key.a92d6616 -> /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/addons-647907/apiserver.key
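
The SAN list at 13:14:53.121789 includes 10.96.0.1 alongside the node IP because the in-cluster kubernetes Service conventionally takes the first address of the service CIDR (10.96.0.0/12, per the StartCluster config further down). A tiny sketch of that derivation, offered as an illustration rather than minikube's actual code:

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        // ParseCIDR yields the masked network address, 10.96.0.0.
        _, cidr, _ := net.ParseCIDR("10.96.0.0/12")
        ip := cidr.IP.To4()
        ip[3]++ // first usable address in the range
        fmt.Println(ip) // 10.96.0.1
    }
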
	I1124 13:14:53.449761    5361 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/addons-647907/proxy-client.key
	I1124 13:14:53.449781    5361 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/addons-647907/proxy-client.crt with IP's: []
	I1124 13:14:53.695377    5361 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/addons-647907/proxy-client.crt ...
	I1124 13:14:53.695407    5361 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/addons-647907/proxy-client.crt: {Name:mk1a0ddb087f9b67d6bc4e2c19e0cb9f4b734f49 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:14:53.695588    5361 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/addons-647907/proxy-client.key ...
	I1124 13:14:53.695600    5361 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/addons-647907/proxy-client.key: {Name:mked4da12ada2dc6b0d187dfeebc752f49dd0053 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:14:53.695807    5361 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca-key.pem (1679 bytes)
	I1124 13:14:53.695853    5361 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca.pem (1078 bytes)
	I1124 13:14:53.695885    5361 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2805/.minikube/certs/cert.pem (1123 bytes)
	I1124 13:14:53.695923    5361 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2805/.minikube/certs/key.pem (1675 bytes)
	I1124 13:14:53.696552    5361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 13:14:53.715084    5361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1124 13:14:53.733215    5361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 13:14:53.751055    5361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1124 13:14:53.769703    5361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/addons-647907/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1124 13:14:53.786821    5361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/addons-647907/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1124 13:14:53.804768    5361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/addons-647907/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 13:14:53.822387    5361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/addons-647907/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1124 13:14:53.840015    5361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 13:14:53.859764    5361 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 13:14:53.874245    5361 ssh_runner.go:195] Run: openssl version
	I1124 13:14:53.880665    5361 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 13:14:53.889831    5361 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 13:14:53.893757    5361 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 13:14 /usr/share/ca-certificates/minikubeCA.pem
	I1124 13:14:53.893849    5361 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 13:14:53.935116    5361 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
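
The openssl x509 -hash call and the b5213941.0 symlink above follow OpenSSL's hashed-directory lookup convention: trust stores under /etc/ssl/certs are scanned by subject-hash filenames, so the minikube CA only becomes resolvable once <hash>.0 points at it. A rough Go equivalent of those two steps, assuming the CA is already linked at /etc/ssl/certs/minikubeCA.pem as the previous commands arranged:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        pem := "/etc/ssl/certs/minikubeCA.pem"
        // Ask openssl for the certificate's subject hash (b5213941 in this run).
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
        if err != nil {
            panic(err)
        }
        link := fmt.Sprintf("/etc/ssl/certs/%s.0", strings.TrimSpace(string(out)))
        // Equivalent to: test -L <link> || ln -fs <pem> <link>
        if _, err := os.Lstat(link); os.IsNotExist(err) {
            _ = os.Symlink(pem, link)
        }
    }
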
	I1124 13:14:53.943891    5361 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 13:14:53.947601    5361 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1124 13:14:53.947659    5361 kubeadm.go:401] StartCluster: {Name:addons-647907 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-647907 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 13:14:53.947752    5361 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 13:14:53.947810    5361 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 13:14:53.976056    5361 cri.go:89] found id: ""
	I1124 13:14:53.976148    5361 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 13:14:53.983955    5361 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1124 13:14:53.991864    5361 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1124 13:14:53.991968    5361 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1124 13:14:54.000759    5361 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1124 13:14:54.000777    5361 kubeadm.go:158] found existing configuration files:
	
	I1124 13:14:54.000839    5361 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1124 13:14:54.011741    5361 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1124 13:14:54.011842    5361 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1124 13:14:54.019938    5361 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1124 13:14:54.028334    5361 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1124 13:14:54.028480    5361 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1124 13:14:54.036364    5361 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1124 13:14:54.044147    5361 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1124 13:14:54.044258    5361 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1124 13:14:54.051331    5361 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1124 13:14:54.059108    5361 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1124 13:14:54.059173    5361 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1124 13:14:54.066691    5361 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1124 13:14:54.105866    5361 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1124 13:14:54.106031    5361 kubeadm.go:319] [preflight] Running pre-flight checks
	I1124 13:14:54.140095    5361 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1124 13:14:54.140169    5361 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1124 13:14:54.140209    5361 kubeadm.go:319] OS: Linux
	I1124 13:14:54.140258    5361 kubeadm.go:319] CGROUPS_CPU: enabled
	I1124 13:14:54.140316    5361 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1124 13:14:54.140378    5361 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1124 13:14:54.140430    5361 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1124 13:14:54.140482    5361 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1124 13:14:54.140533    5361 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1124 13:14:54.140587    5361 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1124 13:14:54.140638    5361 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1124 13:14:54.140688    5361 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1124 13:14:54.219931    5361 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1124 13:14:54.220045    5361 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1124 13:14:54.220142    5361 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1124 13:14:54.227921    5361 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1124 13:14:54.231273    5361 out.go:252]   - Generating certificates and keys ...
	I1124 13:14:54.231441    5361 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1124 13:14:54.231564    5361 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1124 13:14:54.424452    5361 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1124 13:14:54.567937    5361 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1124 13:14:54.760763    5361 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1124 13:14:56.007671    5361 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1124 13:14:56.720997    5361 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1124 13:14:56.721385    5361 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-647907 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1124 13:14:56.944283    5361 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1124 13:14:56.944619    5361 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-647907 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1124 13:14:57.122470    5361 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1124 13:14:57.307920    5361 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1124 13:14:58.732174    5361 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1124 13:14:58.732595    5361 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1124 13:14:58.868347    5361 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1124 13:14:59.560600    5361 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1124 13:14:59.787797    5361 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1124 13:14:59.933101    5361 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1124 13:15:00.204718    5361 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1124 13:15:00.204955    5361 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1124 13:15:00.205028    5361 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1124 13:15:00.233308    5361 out.go:252]   - Booting up control plane ...
	I1124 13:15:00.233413    5361 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1124 13:15:00.233492    5361 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1124 13:15:00.233561    5361 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1124 13:15:00.233665    5361 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1124 13:15:00.233759    5361 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1124 13:15:00.245353    5361 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1124 13:15:00.260773    5361 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1124 13:15:00.260903    5361 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1124 13:15:00.504750    5361 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1124 13:15:00.504869    5361 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1124 13:15:01.504539    5361 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001092887s
	I1124 13:15:01.508207    5361 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1124 13:15:01.508299    5361 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1124 13:15:01.508421    5361 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1124 13:15:01.508496    5361 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1124 13:15:05.756080    5361 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 4.247178313s
	I1124 13:15:06.112181    5361 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.603944185s
	I1124 13:15:08.010035    5361 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.501610716s
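
The three [control-plane-check] probes above poll each component's /livez or /healthz endpoint until it answers, under a 4m0s ceiling. A minimal sketch of that wait loop; the InsecureSkipVerify shortcut is purely illustrative (kubeadm's real check uses proper client credentials, not a blind HTTPS GET):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // waitHealthy polls url until it returns 200 OK or the timeout elapses.
    func waitHealthy(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout:   2 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if resp, err := client.Get(url); err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("%s not healthy after %s", url, timeout)
    }

    func main() {
        fmt.Println(waitHealthy("https://192.168.49.2:8443/livez", 4*time.Minute))
    }
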
	I1124 13:15:08.035110    5361 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1124 13:15:08.051648    5361 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1124 13:15:08.067676    5361 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1124 13:15:08.067874    5361 kubeadm.go:319] [mark-control-plane] Marking the node addons-647907 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1124 13:15:08.081728    5361 kubeadm.go:319] [bootstrap-token] Using token: bbiljv.00xeqiejrgdkivim
	I1124 13:15:08.084913    5361 out.go:252]   - Configuring RBAC rules ...
	I1124 13:15:08.085052    5361 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1124 13:15:08.095628    5361 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1124 13:15:08.105344    5361 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1124 13:15:08.110466    5361 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1124 13:15:08.115893    5361 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1124 13:15:08.121252    5361 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1124 13:15:08.419431    5361 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1124 13:15:08.858608    5361 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1124 13:15:09.419140    5361 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1124 13:15:09.420327    5361 kubeadm.go:319] 
	I1124 13:15:09.420415    5361 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1124 13:15:09.420425    5361 kubeadm.go:319] 
	I1124 13:15:09.420515    5361 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1124 13:15:09.420524    5361 kubeadm.go:319] 
	I1124 13:15:09.420553    5361 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1124 13:15:09.420621    5361 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1124 13:15:09.420686    5361 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1124 13:15:09.420693    5361 kubeadm.go:319] 
	I1124 13:15:09.420750    5361 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1124 13:15:09.420758    5361 kubeadm.go:319] 
	I1124 13:15:09.420809    5361 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1124 13:15:09.420817    5361 kubeadm.go:319] 
	I1124 13:15:09.420875    5361 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1124 13:15:09.420959    5361 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1124 13:15:09.421059    5361 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1124 13:15:09.421069    5361 kubeadm.go:319] 
	I1124 13:15:09.421173    5361 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1124 13:15:09.421260    5361 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1124 13:15:09.421267    5361 kubeadm.go:319] 
	I1124 13:15:09.421351    5361 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token bbiljv.00xeqiejrgdkivim \
	I1124 13:15:09.421458    5361 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:37f0f49cec723293ddb4e564b6685275917c85627d2c55051ccb0f083d16274f \
	I1124 13:15:09.421483    5361 kubeadm.go:319] 	--control-plane 
	I1124 13:15:09.421498    5361 kubeadm.go:319] 
	I1124 13:15:09.421593    5361 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1124 13:15:09.421601    5361 kubeadm.go:319] 
	I1124 13:15:09.421685    5361 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token bbiljv.00xeqiejrgdkivim \
	I1124 13:15:09.421795    5361 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:37f0f49cec723293ddb4e564b6685275917c85627d2c55051ccb0f083d16274f 
	I1124 13:15:09.425415    5361 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1124 13:15:09.425649    5361 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1124 13:15:09.425759    5361 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1124 13:15:09.425778    5361 cni.go:84] Creating CNI manager for ""
	I1124 13:15:09.425793    5361 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 13:15:09.428982    5361 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1124 13:15:09.432025    5361 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1124 13:15:09.436341    5361 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1124 13:15:09.436378    5361 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1124 13:15:09.451139    5361 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1124 13:15:09.744097    5361 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1124 13:15:09.744248    5361 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:15:09.744329    5361 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-647907 minikube.k8s.io/updated_at=2025_11_24T13_15_09_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=b5d1c9f4e75f4e638a533695fd62619949cefcab minikube.k8s.io/name=addons-647907 minikube.k8s.io/primary=true
	I1124 13:15:09.903276    5361 ops.go:34] apiserver oom_adj: -16
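
The oom_adj probe launched at 13:15:09.744097 resolves here to -16: minikube confirms the apiserver runs with a strongly negative OOM adjustment, so the kernel deprioritizes it heavily when choosing processes to kill under memory pressure. The probe itself is just a shell one-liner; a local Go equivalent, assuming pgrep is present:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Read the OOM adjustment of the running kube-apiserver process.
        out, err := exec.Command("/bin/bash", "-c",
            "cat /proc/$(pgrep kube-apiserver)/oom_adj").CombinedOutput()
        fmt.Println(string(out), err) // e.g. "-16"
    }
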
	I1124 13:15:09.903426    5361 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:15:10.403788    5361 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:15:10.903501    5361 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:15:11.404218    5361 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:15:11.904490    5361 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:15:12.403517    5361 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:15:12.903974    5361 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:15:13.403509    5361 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:15:13.903526    5361 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:15:14.052237    5361 kubeadm.go:1114] duration metric: took 4.308032259s to wait for elevateKubeSystemPrivileges
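
The burst of kubectl get sa default calls above, spaced roughly 500ms apart, is the wait loop behind this elevateKubeSystemPrivileges metric: the minikube-rbac ClusterRoleBinding created earlier is only effective once the default ServiceAccount exists. A stripped-down sketch of the same poll, assuming a kubectl on PATH rather than the versioned binary from the log:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        deadline := time.Now().Add(2 * time.Minute)
        for time.Now().Before(deadline) {
            // Exit status 0 means the ServiceAccount is visible.
            err := exec.Command("kubectl", "--kubeconfig", "/var/lib/minikube/kubeconfig",
                "get", "sa", "default").Run()
            if err == nil {
                fmt.Println("default ServiceAccount is ready")
                return
            }
            time.Sleep(500 * time.Millisecond) // matches the ~0.5s spacing in the log
        }
        fmt.Println("timed out waiting for default ServiceAccount")
    }
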
	I1124 13:15:14.052264    5361 kubeadm.go:403] duration metric: took 20.10460896s to StartCluster
	I1124 13:15:14.052280    5361 settings.go:142] acquiring lock: {Name:mk89c1ba43c874315f683e1eb3a8f5ff3817a931 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:15:14.052409    5361 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21932-2805/kubeconfig
	I1124 13:15:14.052810    5361 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2805/kubeconfig: {Name:mk95d10d27091d631e85a5a3c35d5e4e38630871 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:15:14.053003    5361 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 13:15:14.053150    5361 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1124 13:15:14.053417    5361 config.go:182] Loaded profile config "addons-647907": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 13:15:14.053456    5361 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1124 13:15:14.053531    5361 addons.go:70] Setting yakd=true in profile "addons-647907"
	I1124 13:15:14.053546    5361 addons.go:239] Setting addon yakd=true in "addons-647907"
	I1124 13:15:14.053568    5361 host.go:66] Checking if "addons-647907" exists ...
	I1124 13:15:14.054073    5361 cli_runner.go:164] Run: docker container inspect addons-647907 --format={{.State.Status}}
	I1124 13:15:14.054375    5361 addons.go:70] Setting inspektor-gadget=true in profile "addons-647907"
	I1124 13:15:14.054389    5361 addons.go:239] Setting addon inspektor-gadget=true in "addons-647907"
	I1124 13:15:14.054412    5361 host.go:66] Checking if "addons-647907" exists ...
	I1124 13:15:14.054821    5361 cli_runner.go:164] Run: docker container inspect addons-647907 --format={{.State.Status}}
	I1124 13:15:14.055167    5361 addons.go:70] Setting metrics-server=true in profile "addons-647907"
	I1124 13:15:14.055194    5361 addons.go:239] Setting addon metrics-server=true in "addons-647907"
	I1124 13:15:14.055225    5361 host.go:66] Checking if "addons-647907" exists ...
	I1124 13:15:14.055679    5361 cli_runner.go:164] Run: docker container inspect addons-647907 --format={{.State.Status}}
	I1124 13:15:14.059477    5361 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-647907"
	I1124 13:15:14.059553    5361 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-647907"
	I1124 13:15:14.059601    5361 host.go:66] Checking if "addons-647907" exists ...
	I1124 13:15:14.060110    5361 cli_runner.go:164] Run: docker container inspect addons-647907 --format={{.State.Status}}
	I1124 13:15:14.060316    5361 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-647907"
	I1124 13:15:14.060365    5361 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-647907"
	I1124 13:15:14.060402    5361 host.go:66] Checking if "addons-647907" exists ...
	I1124 13:15:14.060827    5361 cli_runner.go:164] Run: docker container inspect addons-647907 --format={{.State.Status}}
	I1124 13:15:14.067023    5361 addons.go:70] Setting cloud-spanner=true in profile "addons-647907"
	I1124 13:15:14.067049    5361 addons.go:70] Setting registry-creds=true in profile "addons-647907"
	I1124 13:15:14.067064    5361 addons.go:239] Setting addon cloud-spanner=true in "addons-647907"
	I1124 13:15:14.067068    5361 addons.go:239] Setting addon registry-creds=true in "addons-647907"
	I1124 13:15:14.067099    5361 host.go:66] Checking if "addons-647907" exists ...
	I1124 13:15:14.067028    5361 addons.go:70] Setting registry=true in profile "addons-647907"
	I1124 13:15:14.067111    5361 addons.go:239] Setting addon registry=true in "addons-647907"
	I1124 13:15:14.067125    5361 host.go:66] Checking if "addons-647907" exists ...
	I1124 13:15:14.067649    5361 cli_runner.go:164] Run: docker container inspect addons-647907 --format={{.State.Status}}
	I1124 13:15:14.067908    5361 cli_runner.go:164] Run: docker container inspect addons-647907 --format={{.State.Status}}
	I1124 13:15:14.077312    5361 addons.go:70] Setting storage-provisioner=true in profile "addons-647907"
	I1124 13:15:14.077358    5361 addons.go:239] Setting addon storage-provisioner=true in "addons-647907"
	I1124 13:15:14.077396    5361 host.go:66] Checking if "addons-647907" exists ...
	I1124 13:15:14.077878    5361 cli_runner.go:164] Run: docker container inspect addons-647907 --format={{.State.Status}}
	I1124 13:15:14.080428    5361 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-647907"
	I1124 13:15:14.080503    5361 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-647907"
	I1124 13:15:14.080536    5361 host.go:66] Checking if "addons-647907" exists ...
	I1124 13:15:14.081005    5361 cli_runner.go:164] Run: docker container inspect addons-647907 --format={{.State.Status}}
	I1124 13:15:14.095100    5361 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-647907"
	I1124 13:15:14.095134    5361 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-647907"
	I1124 13:15:14.095494    5361 cli_runner.go:164] Run: docker container inspect addons-647907 --format={{.State.Status}}
	I1124 13:15:14.108330    5361 addons.go:70] Setting default-storageclass=true in profile "addons-647907"
	I1124 13:15:14.108371    5361 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-647907"
	I1124 13:15:14.108699    5361 cli_runner.go:164] Run: docker container inspect addons-647907 --format={{.State.Status}}
	I1124 13:15:14.123797    5361 addons.go:70] Setting gcp-auth=true in profile "addons-647907"
	I1124 13:15:14.123832    5361 mustload.go:66] Loading cluster: addons-647907
	I1124 13:15:14.124321    5361 addons.go:70] Setting volcano=true in profile "addons-647907"
	I1124 13:15:14.124352    5361 addons.go:239] Setting addon volcano=true in "addons-647907"
	I1124 13:15:14.124400    5361 host.go:66] Checking if "addons-647907" exists ...
	I1124 13:15:14.124607    5361 config.go:182] Loaded profile config "addons-647907": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 13:15:14.124885    5361 cli_runner.go:164] Run: docker container inspect addons-647907 --format={{.State.Status}}
	I1124 13:15:14.128335    5361 cli_runner.go:164] Run: docker container inspect addons-647907 --format={{.State.Status}}
	I1124 13:15:14.147401    5361 addons.go:70] Setting ingress=true in profile "addons-647907"
	I1124 13:15:14.147429    5361 addons.go:239] Setting addon ingress=true in "addons-647907"
	I1124 13:15:14.147490    5361 host.go:66] Checking if "addons-647907" exists ...
	I1124 13:15:14.148724    5361 cli_runner.go:164] Run: docker container inspect addons-647907 --format={{.State.Status}}
	I1124 13:15:14.170988    5361 addons.go:70] Setting volumesnapshots=true in profile "addons-647907"
	I1124 13:15:14.171025    5361 addons.go:239] Setting addon volumesnapshots=true in "addons-647907"
	I1124 13:15:14.171061    5361 host.go:66] Checking if "addons-647907" exists ...
	I1124 13:15:14.171570    5361 cli_runner.go:164] Run: docker container inspect addons-647907 --format={{.State.Status}}
	I1124 13:15:14.175996    5361 addons.go:70] Setting ingress-dns=true in profile "addons-647907"
	I1124 13:15:14.176025    5361 addons.go:239] Setting addon ingress-dns=true in "addons-647907"
	I1124 13:15:14.176065    5361 host.go:66] Checking if "addons-647907" exists ...
	I1124 13:15:14.176570    5361 cli_runner.go:164] Run: docker container inspect addons-647907 --format={{.State.Status}}
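
Every addon toggle in this stretch re-runs docker container inspect ... --format={{.State.Status}} to confirm the node container is still up before mutating it. A minimal equivalent of that check, assuming the docker CLI is installed:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerStatus returns the container's state, e.g. "running" or "exited".
    func containerStatus(name string) (string, error) {
        out, err := exec.Command("docker", "container", "inspect", name,
            "--format", "{{.State.Status}}").Output()
        return strings.TrimSpace(string(out)), err
    }

    func main() {
        status, err := containerStatus("addons-647907")
        fmt.Println(status, err)
    }
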
	I1124 13:15:14.199452    5361 out.go:179] * Verifying Kubernetes components...
	I1124 13:15:14.067099    5361 host.go:66] Checking if "addons-647907" exists ...
	I1124 13:15:14.200261    5361 cli_runner.go:164] Run: docker container inspect addons-647907 --format={{.State.Status}}
	I1124 13:15:14.207700    5361 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 13:15:14.302029    5361 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1124 13:15:14.302301    5361 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1124 13:15:14.302345    5361 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1124 13:15:14.337839    5361 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1124 13:15:14.337855    5361 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1124 13:15:14.337937    5361 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-647907
	I1124 13:15:14.338498    5361 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.46.0
	I1124 13:15:14.339414    5361 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1124 13:15:14.339440    5361 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1124 13:15:14.339517    5361 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-647907
	I1124 13:15:14.364548    5361 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1124 13:15:14.364575    5361 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1124 13:15:14.364639    5361 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-647907
	I1124 13:15:14.369629    5361 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1124 13:15:14.372812    5361 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1124 13:15:14.372885    5361 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1124 13:15:14.372968    5361 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-647907
	I1124 13:15:14.389970    5361 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1124 13:15:14.389990    5361 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1124 13:15:14.390046    5361 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-647907
	I1124 13:15:14.401802    5361 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	W1124 13:15:14.402828    5361 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1124 13:15:14.404572    5361 addons.go:239] Setting addon default-storageclass=true in "addons-647907"
	I1124 13:15:14.407627    5361 host.go:66] Checking if "addons-647907" exists ...
	I1124 13:15:14.408053    5361 cli_runner.go:164] Run: docker container inspect addons-647907 --format={{.State.Status}}
	I1124 13:15:14.404599    5361 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1124 13:15:14.405286    5361 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-647907"
	I1124 13:15:14.408581    5361 host.go:66] Checking if "addons-647907" exists ...
	I1124 13:15:14.408981    5361 cli_runner.go:164] Run: docker container inspect addons-647907 --format={{.State.Status}}
	I1124 13:15:14.438212    5361 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1124 13:15:14.443044    5361 out.go:179]   - Using image docker.io/registry:3.0.0
	I1124 13:15:14.451301    5361 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1124 13:15:14.451332    5361 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1124 13:15:14.451429    5361 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-647907
	I1124 13:15:14.451661    5361 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 13:15:14.458520    5361 host.go:66] Checking if "addons-647907" exists ...
	I1124 13:15:14.460102    5361 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1124 13:15:14.464268    5361 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1124 13:15:14.464288    5361 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1124 13:15:14.464376    5361 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-647907
	I1124 13:15:14.489336    5361 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 13:15:14.489359    5361 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 13:15:14.489418    5361 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-647907
	I1124 13:15:14.496481    5361 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1124 13:15:14.501668    5361 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1124 13:15:14.504702    5361 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1124 13:15:14.508131    5361 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1124 13:15:14.510475    5361 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1124 13:15:14.514751    5361 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1124 13:15:14.514784    5361 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1124 13:15:14.514847    5361 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-647907
	I1124 13:15:14.514993    5361 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1124 13:15:14.518216    5361 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.0
	I1124 13:15:14.522020    5361 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1124 13:15:14.533406    5361 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1124 13:15:14.539787    5361 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	I1124 13:15:14.540004    5361 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1124 13:15:14.540177    5361 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1124 13:15:14.579180    5361 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1124 13:15:14.579265    5361 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1124 13:15:14.579407    5361 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-647907
	I1124 13:15:14.580914    5361 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/addons-647907/id_rsa Username:docker}
	I1124 13:15:14.590912    5361 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/addons-647907/id_rsa Username:docker}
	I1124 13:15:14.591551    5361 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/addons-647907/id_rsa Username:docker}
	I1124 13:15:14.602042    5361 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1124 13:15:14.602116    5361 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1124 13:15:14.602224    5361 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-647907
	I1124 13:15:14.623587    5361 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1124 13:15:14.623609    5361 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1124 13:15:14.623669    5361 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-647907
	I1124 13:15:14.632157    5361 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/addons-647907/id_rsa Username:docker}
	I1124 13:15:14.647170    5361 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/addons-647907/id_rsa Username:docker}
	I1124 13:15:14.647808    5361 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1124 13:15:14.647822    5361 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1124 13:15:14.647893    5361 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-647907
	I1124 13:15:14.673839    5361 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 13:15:14.673869    5361 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 13:15:14.673921    5361 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-647907
	I1124 13:15:14.697298    5361 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1124 13:15:14.700300    5361 out.go:179]   - Using image docker.io/busybox:stable
	I1124 13:15:14.706314    5361 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1124 13:15:14.706343    5361 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1124 13:15:14.706413    5361 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-647907
	I1124 13:15:14.731476    5361 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/addons-647907/id_rsa Username:docker}
	I1124 13:15:14.737170    5361 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/addons-647907/id_rsa Username:docker}
	I1124 13:15:14.770912    5361 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/addons-647907/id_rsa Username:docker}
	I1124 13:15:14.775829    5361 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/addons-647907/id_rsa Username:docker}
	I1124 13:15:14.776948    5361 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/addons-647907/id_rsa Username:docker}
	I1124 13:15:14.798735    5361 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/addons-647907/id_rsa Username:docker}
	I1124 13:15:14.809319    5361 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/addons-647907/id_rsa Username:docker}
	I1124 13:15:14.809893    5361 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/addons-647907/id_rsa Username:docker}
	W1124 13:15:14.820313    5361 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1124 13:15:14.820420    5361 retry.go:31] will retry after 186.99618ms: ssh: handshake failed: EOF
	I1124 13:15:14.832821    5361 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/addons-647907/id_rsa Username:docker}
	I1124 13:15:14.835555    5361 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/addons-647907/id_rsa Username:docker}
	W1124 13:15:15.013465    5361 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1124 13:15:15.013549    5361 retry.go:31] will retry after 561.464476ms: ssh: handshake failed: EOF
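
The two handshake failures above are absorbed by a retry helper that sleeps a randomized, growing interval between attempts (186.99618ms, then 561.464476ms in this run). A sketch of that shape, with a hypothetical dial() standing in for the real sshutil client:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // dial is a placeholder for the real SSH connection attempt.
    func dial() error { return errors.New("ssh: handshake failed: EOF") }

    func main() {
        for attempt, backoff := 1, 100*time.Millisecond; attempt <= 5; attempt++ {
            if err := dial(); err == nil {
                fmt.Println("connected")
                return
            }
            // Jittered, doubling backoff, roughly matching the irregular
            // delays reported in the log.
            sleep := backoff + time.Duration(rand.Int63n(int64(backoff)))
            fmt.Printf("will retry after %s\n", sleep)
            time.Sleep(sleep)
            backoff *= 2
        }
        fmt.Println("giving up")
    }
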
	I1124 13:15:15.035418    5361 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 13:15:15.035802    5361 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
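
Decoding the sed pipeline above: the first -e expression splices a hosts block in front of the forward directive, the second adds log before errors, and the edited Corefile then replaces the coredns ConfigMap. Reconstructed from those two expressions, the relevant part of the Corefile afterwards reads roughly as follows (the elided directives are the stock kubeadm defaults, shown only for context):

    .:53 {
        log
        errors
        ...
        hosts {
           192.168.49.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf
        ...
    }

The effect is that pods resolve host.minikube.internal to the Docker host at 192.168.49.1, while all other queries fall through to the upstream resolver, with query logging switched on.
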
	I1124 13:15:15.276497    5361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 13:15:15.305164    5361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 13:15:15.331878    5361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1124 13:15:15.356054    5361 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1124 13:15:15.356123    5361 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1124 13:15:15.358599    5361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1124 13:15:15.370716    5361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1124 13:15:15.377818    5361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1124 13:15:15.398990    5361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1124 13:15:15.414875    5361 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1124 13:15:15.414947    5361 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1124 13:15:15.472043    5361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1124 13:15:15.497878    5361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1124 13:15:15.510503    5361 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1124 13:15:15.510575    5361 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1124 13:15:15.519330    5361 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1124 13:15:15.519403    5361 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1124 13:15:15.525540    5361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1124 13:15:15.532444    5361 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1124 13:15:15.532519    5361 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1124 13:15:15.691376    5361 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1124 13:15:15.691402    5361 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1124 13:15:15.705230    5361 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1124 13:15:15.705261    5361 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1124 13:15:15.738045    5361 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1124 13:15:15.738070    5361 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1124 13:15:15.742207    5361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1124 13:15:15.853148    5361 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1124 13:15:15.853174    5361 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1124 13:15:15.859317    5361 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1124 13:15:15.859393    5361 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1124 13:15:15.882748    5361 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1124 13:15:15.882777    5361 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1124 13:15:16.035409    5361 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1124 13:15:16.035432    5361 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1124 13:15:16.077607    5361 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1124 13:15:16.077632    5361 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1124 13:15:16.097756    5361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1124 13:15:16.245640    5361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1124 13:15:16.301538    5361 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1124 13:15:16.301565    5361 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1124 13:15:16.365916    5361 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1124 13:15:16.365942    5361 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1124 13:15:16.553541    5361 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1124 13:15:16.553616    5361 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1124 13:15:16.600584    5361 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1124 13:15:16.600653    5361 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1124 13:15:16.676275    5361 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1124 13:15:16.676357    5361 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1124 13:15:16.906225    5361 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1124 13:15:16.906304    5361 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1124 13:15:16.957563    5361 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.922064095s)
	I1124 13:15:16.958331    5361 node_ready.go:35] waiting up to 6m0s for node "addons-647907" to be "Ready" ...
	I1124 13:15:16.958581    5361 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.682053455s)
	I1124 13:15:16.958750    5361 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.922895766s)
	I1124 13:15:16.958789    5361 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
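The bash pipeline that just completed (1.92s) rewrites the coredns ConfigMap in flight: sed inserts a hosts stanza immediately before the "forward . /etc/resolv.conf" line and a log directive before errors, then kubectl replace writes the result back. Assuming the stock Corefile, the patched section comes out roughly as:

	        hosts {
	           192.168.49.1 host.minikube.internal
	           fallthrough
	        }
	        forward . /etc/resolv.conf

so pods inside the cluster can resolve host.minikube.internal to the docker network gateway while every other name falls through to the normal forwarder.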
	I1124 13:15:16.981946    5361 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1124 13:15:16.982019    5361 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1124 13:15:17.156977    5361 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1124 13:15:17.157045    5361 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1124 13:15:17.185169    5361 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1124 13:15:17.185241    5361 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1124 13:15:17.252769    5361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1124 13:15:17.273692    5361 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1124 13:15:17.273762    5361 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1124 13:15:17.455554    5361 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1124 13:15:17.455631    5361 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1124 13:15:17.478243    5361 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-647907" context rescaled to 1 replicas
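The kapi.go:214 line above is minikube trimming CoreDNS from the default two replicas down to one, which is enough for a single-node cluster and frees a little capacity for the addons being installed. Through client-go this is a scale-subresource update; a minimal sketch, with rescale as an illustrative name rather than minikube's actual code:

	package probe

	import (
		"context"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// rescale sets the replica count of a Deployment via its scale subresource.
	func rescale(ctx context.Context, c kubernetes.Interface, ns, name string, replicas int32) error {
		scale, err := c.AppsV1().Deployments(ns).GetScale(ctx, name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		scale.Spec.Replicas = replicas
		_, err = c.AppsV1().Deployments(ns).UpdateScale(ctx, name, scale, metav1.UpdateOptions{})
		return err
	}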
	I1124 13:15:17.655241    5361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1124 13:15:18.630107    5361 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.324861057s)
	I1124 13:15:18.630256    5361 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.298299895s)
	I1124 13:15:18.630314    5361 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.271649265s)
	I1124 13:15:18.630373    5361 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (3.259600555s)
	W1124 13:15:18.968215    5361 node_ready.go:57] node "addons-647907" has "Ready":"False" status (will retry)
	I1124 13:15:19.895371    5361 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.517498125s)
	I1124 13:15:19.895455    5361 addons.go:495] Verifying addon ingress=true in "addons-647907"
	I1124 13:15:19.895657    5361 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.496597533s)
	I1124 13:15:19.895883    5361 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.423768163s)
	I1124 13:15:19.895917    5361 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (4.397972703s)
	I1124 13:15:19.895985    5361 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (4.370386588s)
	I1124 13:15:19.896011    5361 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.153780143s)
	I1124 13:15:19.896017    5361 addons.go:495] Verifying addon registry=true in "addons-647907"
	I1124 13:15:19.896247    5361 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (3.798464587s)
	I1124 13:15:19.896418    5361 addons.go:495] Verifying addon metrics-server=true in "addons-647907"
	I1124 13:15:19.896488    5361 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (3.650819434s)
	I1124 13:15:19.899433    5361 out.go:179] * Verifying registry addon...
	I1124 13:15:19.899512    5361 out.go:179] * Verifying ingress addon...
	I1124 13:15:19.899552    5361 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-647907 service yakd-dashboard -n yakd-dashboard
	
	I1124 13:15:19.903901    5361 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1124 13:15:19.904763    5361 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1124 13:15:19.942026    5361 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1124 13:15:19.942046    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:19.942073    5361 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1124 13:15:19.942084    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
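Every kapi.go:96 line from here on is one iteration of a poll loop: list the pods matching the label selector, report any that are not yet Running ("[<nil>]" is just the empty container-error list printed alongside the Pending phase), and try again until all of them are ready or the timeout expires. A minimal client-go sketch of the same loop; waitForPods and the 2-second interval are illustrative, not minikube's exact code:

	package probe

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// waitForPods blocks until every pod matching selector in ns is Running.
	func waitForPods(ctx context.Context, c kubernetes.Interface, ns, selector string) error {
		for {
			pods, err := c.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				return err
			}
			ready := len(pods.Items) > 0
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
					ready = false
				}
			}
			if ready {
				return nil
			}
			select {
			case <-ctx.Done():
				return ctx.Err() // timeout or cancellation ends the wait
			case <-time.After(2 * time.Second):
			}
		}
	}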
	I1124 13:15:20.110282    5361 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.857424147s)
	W1124 13:15:20.110320    5361 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1124 13:15:20.110364    5361 retry.go:31] will retry after 181.086691ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1124 13:15:20.291984    5361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1124 13:15:20.374698    5361 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.719355747s)
	I1124 13:15:20.374735    5361 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-647907"
	I1124 13:15:20.377998    5361 out.go:179] * Verifying csi-hostpath-driver addon...
	I1124 13:15:20.381576    5361 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1124 13:15:20.395322    5361 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1124 13:15:20.395346    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:20.496480    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:20.496610    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:20.885591    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:20.908633    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:20.909399    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:21.385726    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:21.408094    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:21.408436    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1124 13:15:21.462390    5361 node_ready.go:57] node "addons-647907" has "Ready":"False" status (will retry)
	I1124 13:15:21.885253    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:21.908665    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:21.908715    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:22.070519    5361 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1124 13:15:22.070605    5361 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-647907
	I1124 13:15:22.088164    5361 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/addons-647907/id_rsa Username:docker}
	I1124 13:15:22.205216    5361 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1124 13:15:22.217381    5361 addons.go:239] Setting addon gcp-auth=true in "addons-647907"
	I1124 13:15:22.217437    5361 host.go:66] Checking if "addons-647907" exists ...
	I1124 13:15:22.217879    5361 cli_runner.go:164] Run: docker container inspect addons-647907 --format={{.State.Status}}
	I1124 13:15:22.234571    5361 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1124 13:15:22.234623    5361 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-647907
	I1124 13:15:22.252244    5361 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/addons-647907/id_rsa Username:docker}
	I1124 13:15:22.385413    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:22.406979    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:22.407316    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:22.885585    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:22.910598    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:22.912674    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:23.116587    5361 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.824552839s)
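The failure retried above is a create-then-use race rather than a broken manifest: the VolumeSnapshotClass object is applied in the same kubectl invocation that creates its CRD, and the client builds its REST mapping from API discovery that ran before the new CRDs were established, hence "no matches for kind ... ensure CRDs are installed first". After the 181ms backoff, the forced re-apply (completed above in 2.82s) succeeds because the apiserver is now serving the new resources. The retry shape reduces to something like this sketch, with applyManifests standing in for the kubectl call:

	package probe

	import (
		"context"
		"time"
	)

	// applyWithRetry re-runs applyManifests with exponential backoff until it
	// succeeds or the backoff budget is exhausted.
	func applyWithRetry(ctx context.Context, applyManifests func() error, maxBackoff time.Duration) error {
		backoff := 200 * time.Millisecond // the log above drew ~181ms for its first wait
		for {
			err := applyManifests()
			if err == nil {
				return nil
			}
			if backoff > maxBackoff {
				return err
			}
			select {
			case <-ctx.Done():
				return ctx.Err()
			case <-time.After(backoff):
			}
			backoff *= 2
		}
	}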
	I1124 13:15:23.119808    5361 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1124 13:15:23.122761    5361 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1124 13:15:23.125534    5361 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1124 13:15:23.125554    5361 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1124 13:15:23.138119    5361 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1124 13:15:23.138178    5361 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1124 13:15:23.151188    5361 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1124 13:15:23.151252    5361 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1124 13:15:23.163134    5361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1124 13:15:23.385933    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:23.407627    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:23.409283    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:23.625159    5361 addons.go:495] Verifying addon gcp-auth=true in "addons-647907"
	I1124 13:15:23.628312    5361 out.go:179] * Verifying gcp-auth addon...
	I1124 13:15:23.631820    5361 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1124 13:15:23.641236    5361 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1124 13:15:23.641306    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:23.885817    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:23.907993    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:23.908063    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1124 13:15:23.961969    5361 node_ready.go:57] node "addons-647907" has "Ready":"False" status (will retry)
	I1124 13:15:24.134943    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:24.384803    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:24.406721    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:24.409324    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:24.635216    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:24.885684    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:24.908098    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:24.908243    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:25.134945    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:25.384950    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:25.406921    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:25.407683    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:25.635432    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:25.884379    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:25.907591    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:25.908851    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:26.135345    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:26.385333    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:26.406773    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:26.407678    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1124 13:15:26.461344    5361 node_ready.go:57] node "addons-647907" has "Ready":"False" status (will retry)
	I1124 13:15:26.635523    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:26.884682    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:26.907670    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:26.907816    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:27.134946    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:27.384904    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:27.406820    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:27.407583    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:27.635149    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:27.885344    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:27.907981    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:27.908922    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:28.135313    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:28.385296    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:28.407272    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:28.408031    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1124 13:15:28.461617    5361 node_ready.go:57] node "addons-647907" has "Ready":"False" status (will retry)
	I1124 13:15:28.635559    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:28.884662    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:28.907668    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:28.907803    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:29.135399    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:29.385184    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:29.406692    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:29.407755    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:29.635139    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:29.884978    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:29.907248    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:29.907895    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:30.135470    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:30.384235    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:30.407075    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:30.407424    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:30.635452    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:30.885560    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:30.907399    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:30.907708    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1124 13:15:30.961448    5361 node_ready.go:57] node "addons-647907" has "Ready":"False" status (will retry)
	I1124 13:15:31.135470    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:31.385196    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:31.406865    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:31.408076    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:31.634524    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:31.885186    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:31.907605    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:31.908156    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:32.135378    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:32.384307    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:32.406793    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:32.407644    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:32.635146    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:32.887701    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:32.908345    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:32.908542    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:33.135129    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:33.384713    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:33.407970    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:33.408063    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1124 13:15:33.461658    5361 node_ready.go:57] node "addons-647907" has "Ready":"False" status (will retry)
	I1124 13:15:33.635408    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:33.885243    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:33.908388    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:33.908942    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:34.135431    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:34.385312    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:34.406721    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:34.407926    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:34.635426    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:34.885799    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:34.907799    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:34.908061    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:35.136248    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:35.385254    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:35.407135    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:35.408856    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:35.635396    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:35.884279    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:35.908650    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:35.908785    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1124 13:15:35.961432    5361 node_ready.go:57] node "addons-647907" has "Ready":"False" status (will retry)
	I1124 13:15:36.135317    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:36.385137    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:36.406775    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:36.407805    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:36.635689    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:36.884617    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:36.907888    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:36.908372    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:37.134814    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:37.384598    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:37.408479    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:37.408911    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:37.634946    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:37.884857    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:37.907110    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:37.908493    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:38.135226    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:38.385152    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:38.407624    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:38.408090    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1124 13:15:38.461727    5361 node_ready.go:57] node "addons-647907" has "Ready":"False" status (will retry)
	I1124 13:15:38.635317    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:38.884582    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:38.908783    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:38.908949    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:39.134691    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:39.385096    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:39.406854    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:39.407268    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:39.634588    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:39.884328    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:39.906861    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:39.908214    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:40.135549    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:40.384316    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:40.407910    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:40.408075    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:40.635235    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:40.885837    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:40.907117    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:40.907600    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1124 13:15:40.961409    5361 node_ready.go:57] node "addons-647907" has "Ready":"False" status (will retry)
	I1124 13:15:41.135026    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:41.385148    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:41.406962    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:41.409502    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:41.634732    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:41.884246    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:41.907196    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:41.908846    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:42.136671    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:42.384712    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:42.407791    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:42.407928    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:42.635313    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:42.886071    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:42.908549    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:42.916324    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1124 13:15:42.963090    5361 node_ready.go:57] node "addons-647907" has "Ready":"False" status (will retry)
	I1124 13:15:43.135145    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:43.385192    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:43.406733    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:43.407877    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:43.635175    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:43.885028    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:43.908263    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:43.908380    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:44.135075    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:44.385429    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:44.407295    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:44.408690    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:44.635213    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:44.884906    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:44.906868    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:44.908088    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:45.145899    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:45.385095    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:45.407336    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:45.407471    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1124 13:15:45.461049    5361 node_ready.go:57] node "addons-647907" has "Ready":"False" status (will retry)
	I1124 13:15:45.635022    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:45.885012    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:45.907859    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:45.908889    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:46.135432    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:46.384468    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:46.408588    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:46.409427    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:46.635121    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:46.885669    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:46.908075    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:46.908409    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:47.135679    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:47.384968    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:47.406787    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:47.407239    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1124 13:15:47.461901    5361 node_ready.go:57] node "addons-647907" has "Ready":"False" status (will retry)
	I1124 13:15:47.635150    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:47.885311    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:47.908853    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:47.909239    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:48.134719    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:48.384815    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:48.408553    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:48.408849    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:48.635066    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:48.884981    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:48.908187    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:48.908472    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:49.134823    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:49.384890    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:49.406768    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:49.407524    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:49.635017    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:49.890230    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:49.907496    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:49.908771    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1124 13:15:49.961505    5361 node_ready.go:57] node "addons-647907" has "Ready":"False" status (will retry)
	I1124 13:15:50.135505    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:50.385406    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:50.407673    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:50.407826    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:50.635532    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:50.884180    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:50.907440    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:50.908703    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:51.135466    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:51.384553    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:51.407738    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:51.408015    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:51.634770    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:51.884792    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:51.907456    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:51.907974    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1124 13:15:51.961684    5361 node_ready.go:57] node "addons-647907" has "Ready":"False" status (will retry)
	I1124 13:15:52.135493    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:52.384331    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:52.406971    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:52.407886    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:52.634445    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:52.885210    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:52.906953    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:52.909489    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:53.134947    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:53.385055    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:53.407599    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:53.408029    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:53.634574    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:53.885539    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:53.907576    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:53.907730    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:54.135455    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:54.385156    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:54.406970    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:54.407587    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1124 13:15:54.462420    5361 node_ready.go:57] node "addons-647907" has "Ready":"False" status (will retry)
	I1124 13:15:54.635281    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:54.905066    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:54.923691    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:54.924098    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:54.985496    5361 node_ready.go:49] node "addons-647907" is "Ready"
	I1124 13:15:54.985529    5361 node_ready.go:38] duration metric: took 38.027131264s for node "addons-647907" to be "Ready" ...
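The two lines above mark the end of the ~38s wait for the node's Ready condition. A minimal client-go sketch of that check follows; the node name is taken from the log, while the kubeconfig path and 2s poll cadence are illustrative assumptions, not minikube's actual node_ready.go.

```go
// Sketch: poll a node's Ready condition with client-go. Node name comes from
// the log above; kubeconfig location and interval are assumptions.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func isReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	for {
		n, err := cs.CoreV1().Nodes().Get(context.TODO(), "addons-647907", metav1.GetOptions{})
		if err == nil && isReady(n) {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(2 * time.Second) // the log retries on a similar cadence
	}
}
```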
	I1124 13:15:54.985544    5361 api_server.go:52] waiting for apiserver process to appear ...
	I1124 13:15:54.985608    5361 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 13:15:55.007708    5361 api_server.go:72] duration metric: took 40.954676167s to wait for apiserver process to appear ...
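The apiserver-process wait above is just the exit status of the pgrep command shown in the log, executed over SSH inside the node. A local sketch of the same check, with the pattern copied verbatim from the log line:

```go
// Sketch: detect a running kube-apiserver process via pgrep's exit status
// (0 = at least one match). The harness runs this over SSH with sudo; here
// it runs locally for illustration.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// -x exact match, -n newest only, -f match the full command line.
	err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
	if err != nil {
		fmt.Println("apiserver process not found yet:", err)
		return
	}
	fmt.Println("apiserver process is up")
}
```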
	I1124 13:15:55.007743    5361 api_server.go:88] waiting for apiserver healthz status ...
	I1124 13:15:55.007786    5361 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1124 13:15:55.032974    5361 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1124 13:15:55.035570    5361 api_server.go:141] control plane version: v1.34.1
	I1124 13:15:55.035693    5361 api_server.go:131] duration metric: took 27.940527ms to wait for apiserver health ...
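Once the process exists, the harness polls the healthz endpoint shown above until it returns HTTP 200 with body "ok". A hedged sketch of that loop; the URL is from the log, while `InsecureSkipVerify` is only an assumption to keep the sketch self-contained (the real client would trust the cluster CA).

```go
// Sketch: poll the apiserver /healthz endpoint until it returns HTTP 200.
// TLS verification is skipped here purely for illustration.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for {
		resp, err := client.Get("https://192.168.49.2:8443/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
				return
			}
		}
		time.Sleep(time.Second)
	}
}
```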
	I1124 13:15:55.035722    5361 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 13:15:55.056401    5361 system_pods.go:59] 19 kube-system pods found
	I1124 13:15:55.056484    5361 system_pods.go:61] "coredns-66bc5c9577-hhndw" [65c642fe-4303-4dda-b00c-9892237a778c] Pending
	I1124 13:15:55.056506    5361 system_pods.go:61] "csi-hostpath-attacher-0" [71395414-4872-496e-9036-28fa99a9b658] Pending
	I1124 13:15:55.056526    5361 system_pods.go:61] "csi-hostpath-resizer-0" [279dab72-076d-46b9-90f4-1d69768633da] Pending
	I1124 13:15:55.056563    5361 system_pods.go:61] "csi-hostpathplugin-89nqp" [933c89cf-7b06-45b0-a9b4-be0b5f3b3bde] Pending
	I1124 13:15:55.056583    5361 system_pods.go:61] "etcd-addons-647907" [876e6603-49e6-44f6-9331-ed2a53b6657b] Running
	I1124 13:15:55.056603    5361 system_pods.go:61] "kindnet-cq7x5" [d7365392-f359-4f14-93e6-ae671834849d] Running
	I1124 13:15:55.056636    5361 system_pods.go:61] "kube-apiserver-addons-647907" [af27e5b5-2033-4c23-95d2-39ec0194646d] Running
	I1124 13:15:55.056662    5361 system_pods.go:61] "kube-controller-manager-addons-647907" [132a9fce-2239-4973-a9e3-3af445872d21] Running
	I1124 13:15:55.056686    5361 system_pods.go:61] "kube-ingress-dns-minikube" [8bade7cb-4229-4fda-9216-6adefd2a920f] Pending
	I1124 13:15:55.056721    5361 system_pods.go:61] "kube-proxy-n8mpw" [56957e05-4c86-4594-937e-b547b9ffdb86] Running
	I1124 13:15:55.056743    5361 system_pods.go:61] "kube-scheduler-addons-647907" [e8469bdd-2a1b-4d5a-932b-866e717756f2] Running
	I1124 13:15:55.056764    5361 system_pods.go:61] "metrics-server-85b7d694d7-xlsnf" [1ef721be-841d-4a36-949e-55c1b518e346] Pending
	I1124 13:15:55.056800    5361 system_pods.go:61] "nvidia-device-plugin-daemonset-dn469" [983203db-3631-4a06-80b8-418beda496e4] Pending
	I1124 13:15:55.056823    5361 system_pods.go:61] "registry-6b586f9694-2l7kt" [9c4c505b-2065-4059-a0a8-66ae39acca38] Pending
	I1124 13:15:55.056843    5361 system_pods.go:61] "registry-creds-764b6fb674-pf26d" [eb7a3229-dc9f-4def-8eb7-2b7e2e8b7ad6] Pending
	I1124 13:15:55.056876    5361 system_pods.go:61] "registry-proxy-9hgsb" [fe9ffd77-b342-49e0-b2bb-e1634ce62247] Pending
	I1124 13:15:55.056899    5361 system_pods.go:61] "snapshot-controller-7d9fbc56b8-49v4w" [dc47a8db-7218-447d-b1a6-4534add69c8f] Pending
	I1124 13:15:55.056919    5361 system_pods.go:61] "snapshot-controller-7d9fbc56b8-qnq5w" [112f4619-ab05-454a-bb09-7bce99c81b00] Pending
	I1124 13:15:55.056956    5361 system_pods.go:61] "storage-provisioner" [cc0f47a2-043c-4b50-bc22-c33673a0ea35] Pending
	I1124 13:15:55.056981    5361 system_pods.go:74] duration metric: took 21.23853ms to wait for pod list to return data ...
	I1124 13:15:55.057042    5361 default_sa.go:34] waiting for default service account to be created ...
	I1124 13:15:55.081341    5361 default_sa.go:45] found service account: "default"
	I1124 13:15:55.081418    5361 default_sa.go:55] duration metric: took 24.351747ms for default service account to be created ...
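The default-service-account wait above follows the same Get-in-a-loop shape. A small client-go sketch, with namespace and account name matching the "found service account" line; kubeconfig path and interval are assumptions.

```go
// Sketch: wait until the "default" ServiceAccount exists, mirroring the
// default_sa wait logged above.
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	for {
		sa, err := cs.CoreV1().ServiceAccounts("default").Get(context.TODO(), "default", metav1.GetOptions{})
		if err == nil {
			fmt.Println("found service account:", sa.Name)
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
}
```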
	I1124 13:15:55.081442    5361 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 13:15:55.098846    5361 system_pods.go:86] 19 kube-system pods found
	I1124 13:15:55.098932    5361 system_pods.go:89] "coredns-66bc5c9577-hhndw" [65c642fe-4303-4dda-b00c-9892237a778c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 13:15:55.098956    5361 system_pods.go:89] "csi-hostpath-attacher-0" [71395414-4872-496e-9036-28fa99a9b658] Pending
	I1124 13:15:55.099016    5361 system_pods.go:89] "csi-hostpath-resizer-0" [279dab72-076d-46b9-90f4-1d69768633da] Pending
	I1124 13:15:55.099041    5361 system_pods.go:89] "csi-hostpathplugin-89nqp" [933c89cf-7b06-45b0-a9b4-be0b5f3b3bde] Pending
	I1124 13:15:55.099065    5361 system_pods.go:89] "etcd-addons-647907" [876e6603-49e6-44f6-9331-ed2a53b6657b] Running
	I1124 13:15:55.099100    5361 system_pods.go:89] "kindnet-cq7x5" [d7365392-f359-4f14-93e6-ae671834849d] Running
	I1124 13:15:55.099126    5361 system_pods.go:89] "kube-apiserver-addons-647907" [af27e5b5-2033-4c23-95d2-39ec0194646d] Running
	I1124 13:15:55.099146    5361 system_pods.go:89] "kube-controller-manager-addons-647907" [132a9fce-2239-4973-a9e3-3af445872d21] Running
	I1124 13:15:55.099185    5361 system_pods.go:89] "kube-ingress-dns-minikube" [8bade7cb-4229-4fda-9216-6adefd2a920f] Pending
	I1124 13:15:55.099208    5361 system_pods.go:89] "kube-proxy-n8mpw" [56957e05-4c86-4594-937e-b547b9ffdb86] Running
	I1124 13:15:55.099232    5361 system_pods.go:89] "kube-scheduler-addons-647907" [e8469bdd-2a1b-4d5a-932b-866e717756f2] Running
	I1124 13:15:55.099266    5361 system_pods.go:89] "metrics-server-85b7d694d7-xlsnf" [1ef721be-841d-4a36-949e-55c1b518e346] Pending
	I1124 13:15:55.099291    5361 system_pods.go:89] "nvidia-device-plugin-daemonset-dn469" [983203db-3631-4a06-80b8-418beda496e4] Pending
	I1124 13:15:55.099315    5361 system_pods.go:89] "registry-6b586f9694-2l7kt" [9c4c505b-2065-4059-a0a8-66ae39acca38] Pending
	I1124 13:15:55.099348    5361 system_pods.go:89] "registry-creds-764b6fb674-pf26d" [eb7a3229-dc9f-4def-8eb7-2b7e2e8b7ad6] Pending
	I1124 13:15:55.099404    5361 system_pods.go:89] "registry-proxy-9hgsb" [fe9ffd77-b342-49e0-b2bb-e1634ce62247] Pending
	I1124 13:15:55.099438    5361 system_pods.go:89] "snapshot-controller-7d9fbc56b8-49v4w" [dc47a8db-7218-447d-b1a6-4534add69c8f] Pending
	I1124 13:15:55.099459    5361 system_pods.go:89] "snapshot-controller-7d9fbc56b8-qnq5w" [112f4619-ab05-454a-bb09-7bce99c81b00] Pending
	I1124 13:15:55.099480    5361 system_pods.go:89] "storage-provisioner" [cc0f47a2-043c-4b50-bc22-c33673a0ea35] Pending
	I1124 13:15:55.099522    5361 retry.go:31] will retry after 306.124483ms: missing components: kube-dns
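The retry delays in these "will retry after" lines (306ms, 249ms, 365ms) are irregular, which suggests a jittered delay rather than a fixed interval. The sketch below shows that pattern under that assumption; it is not minikube's actual retry.go.

```go
// Sketch: retry a check with a jittered delay, an assumption that would
// explain the slightly irregular retry durations in the log.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func retryWithJitter(base time.Duration, attempts int, fn func() error) error {
	for i := 0; i < attempts; i++ {
		if err := fn(); err == nil {
			return nil
		}
		// Sleep base +/- 50%, e.g. roughly 150ms..450ms for a 300ms base.
		jitter := time.Duration(rand.Int63n(int64(base))) - base/2
		time.Sleep(base + jitter)
	}
	return errors.New("still failing after retries")
}

func main() {
	missing := 3
	err := retryWithJitter(300*time.Millisecond, 10, func() error {
		if missing > 0 { // stand-in for "missing components: kube-dns"
			missing--
			return fmt.Errorf("missing components")
		}
		return nil
	})
	fmt.Println("result:", err)
}
```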
	I1124 13:15:55.155384    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:55.400825    5361 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1124 13:15:55.400896    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:55.425835    5361 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1124 13:15:55.425903    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:55.426201    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
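The repeated kapi.go:96 lines poll pods by label selector until every match is Running; the "Found N Pods for label selector" lines above appear as soon as the list is non-empty. A client-go sketch of that pattern, using the registry selector from the log and the kube-system namespace where the pod list above places the registry pods:

```go
// Sketch: wait for all pods matching a label selector to reach the Running
// phase, the pattern behind the repeated "waiting for pod" lines.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	selector := "kubernetes.io/minikube-addons=registry" // taken from the log
	for {
		pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
			metav1.ListOptions{LabelSelector: selector})
		if err == nil && len(pods.Items) > 0 {
			running := 0
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					running++
				}
			}
			fmt.Printf("found %d pods, %d running\n", len(pods.Items), running)
			if running == len(pods.Items) {
				return
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
}
```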
	I1124 13:15:55.442348    5361 system_pods.go:86] 19 kube-system pods found
	I1124 13:15:55.442434    5361 system_pods.go:89] "coredns-66bc5c9577-hhndw" [65c642fe-4303-4dda-b00c-9892237a778c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 13:15:55.442457    5361 system_pods.go:89] "csi-hostpath-attacher-0" [71395414-4872-496e-9036-28fa99a9b658] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1124 13:15:55.442499    5361 system_pods.go:89] "csi-hostpath-resizer-0" [279dab72-076d-46b9-90f4-1d69768633da] Pending
	I1124 13:15:55.442524    5361 system_pods.go:89] "csi-hostpathplugin-89nqp" [933c89cf-7b06-45b0-a9b4-be0b5f3b3bde] Pending
	I1124 13:15:55.442546    5361 system_pods.go:89] "etcd-addons-647907" [876e6603-49e6-44f6-9331-ed2a53b6657b] Running
	I1124 13:15:55.442583    5361 system_pods.go:89] "kindnet-cq7x5" [d7365392-f359-4f14-93e6-ae671834849d] Running
	I1124 13:15:55.442607    5361 system_pods.go:89] "kube-apiserver-addons-647907" [af27e5b5-2033-4c23-95d2-39ec0194646d] Running
	I1124 13:15:55.442626    5361 system_pods.go:89] "kube-controller-manager-addons-647907" [132a9fce-2239-4973-a9e3-3af445872d21] Running
	I1124 13:15:55.442671    5361 system_pods.go:89] "kube-ingress-dns-minikube" [8bade7cb-4229-4fda-9216-6adefd2a920f] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1124 13:15:55.442695    5361 system_pods.go:89] "kube-proxy-n8mpw" [56957e05-4c86-4594-937e-b547b9ffdb86] Running
	I1124 13:15:55.442721    5361 system_pods.go:89] "kube-scheduler-addons-647907" [e8469bdd-2a1b-4d5a-932b-866e717756f2] Running
	I1124 13:15:55.442755    5361 system_pods.go:89] "metrics-server-85b7d694d7-xlsnf" [1ef721be-841d-4a36-949e-55c1b518e346] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1124 13:15:55.442779    5361 system_pods.go:89] "nvidia-device-plugin-daemonset-dn469" [983203db-3631-4a06-80b8-418beda496e4] Pending
	I1124 13:15:55.442803    5361 system_pods.go:89] "registry-6b586f9694-2l7kt" [9c4c505b-2065-4059-a0a8-66ae39acca38] Pending
	I1124 13:15:55.442842    5361 system_pods.go:89] "registry-creds-764b6fb674-pf26d" [eb7a3229-dc9f-4def-8eb7-2b7e2e8b7ad6] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1124 13:15:55.442869    5361 system_pods.go:89] "registry-proxy-9hgsb" [fe9ffd77-b342-49e0-b2bb-e1634ce62247] Pending
	I1124 13:15:55.442893    5361 system_pods.go:89] "snapshot-controller-7d9fbc56b8-49v4w" [dc47a8db-7218-447d-b1a6-4534add69c8f] Pending
	I1124 13:15:55.442931    5361 system_pods.go:89] "snapshot-controller-7d9fbc56b8-qnq5w" [112f4619-ab05-454a-bb09-7bce99c81b00] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1124 13:15:55.442955    5361 system_pods.go:89] "storage-provisioner" [cc0f47a2-043c-4b50-bc22-c33673a0ea35] Pending
	I1124 13:15:55.442988    5361 retry.go:31] will retry after 249.933697ms: missing components: kube-dns
	I1124 13:15:55.639893    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:55.705965    5361 system_pods.go:86] 19 kube-system pods found
	I1124 13:15:55.706054    5361 system_pods.go:89] "coredns-66bc5c9577-hhndw" [65c642fe-4303-4dda-b00c-9892237a778c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 13:15:55.706081    5361 system_pods.go:89] "csi-hostpath-attacher-0" [71395414-4872-496e-9036-28fa99a9b658] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1124 13:15:55.706121    5361 system_pods.go:89] "csi-hostpath-resizer-0" [279dab72-076d-46b9-90f4-1d69768633da] Pending
	I1124 13:15:55.706149    5361 system_pods.go:89] "csi-hostpathplugin-89nqp" [933c89cf-7b06-45b0-a9b4-be0b5f3b3bde] Pending
	I1124 13:15:55.706174    5361 system_pods.go:89] "etcd-addons-647907" [876e6603-49e6-44f6-9331-ed2a53b6657b] Running
	I1124 13:15:55.706214    5361 system_pods.go:89] "kindnet-cq7x5" [d7365392-f359-4f14-93e6-ae671834849d] Running
	I1124 13:15:55.706241    5361 system_pods.go:89] "kube-apiserver-addons-647907" [af27e5b5-2033-4c23-95d2-39ec0194646d] Running
	I1124 13:15:55.706265    5361 system_pods.go:89] "kube-controller-manager-addons-647907" [132a9fce-2239-4973-a9e3-3af445872d21] Running
	I1124 13:15:55.706304    5361 system_pods.go:89] "kube-ingress-dns-minikube" [8bade7cb-4229-4fda-9216-6adefd2a920f] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1124 13:15:55.706330    5361 system_pods.go:89] "kube-proxy-n8mpw" [56957e05-4c86-4594-937e-b547b9ffdb86] Running
	I1124 13:15:55.706355    5361 system_pods.go:89] "kube-scheduler-addons-647907" [e8469bdd-2a1b-4d5a-932b-866e717756f2] Running
	I1124 13:15:55.706388    5361 system_pods.go:89] "metrics-server-85b7d694d7-xlsnf" [1ef721be-841d-4a36-949e-55c1b518e346] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1124 13:15:55.706415    5361 system_pods.go:89] "nvidia-device-plugin-daemonset-dn469" [983203db-3631-4a06-80b8-418beda496e4] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1124 13:15:55.706438    5361 system_pods.go:89] "registry-6b586f9694-2l7kt" [9c4c505b-2065-4059-a0a8-66ae39acca38] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1124 13:15:55.706477    5361 system_pods.go:89] "registry-creds-764b6fb674-pf26d" [eb7a3229-dc9f-4def-8eb7-2b7e2e8b7ad6] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1124 13:15:55.706502    5361 system_pods.go:89] "registry-proxy-9hgsb" [fe9ffd77-b342-49e0-b2bb-e1634ce62247] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1124 13:15:55.706526    5361 system_pods.go:89] "snapshot-controller-7d9fbc56b8-49v4w" [dc47a8db-7218-447d-b1a6-4534add69c8f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1124 13:15:55.706565    5361 system_pods.go:89] "snapshot-controller-7d9fbc56b8-qnq5w" [112f4619-ab05-454a-bb09-7bce99c81b00] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1124 13:15:55.706591    5361 system_pods.go:89] "storage-provisioner" [cc0f47a2-043c-4b50-bc22-c33673a0ea35] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 13:15:55.706637    5361 retry.go:31] will retry after 365.091725ms: missing components: kube-dns
	I1124 13:15:55.886783    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:55.989028    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:55.989349    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:56.097669    5361 system_pods.go:86] 19 kube-system pods found
	I1124 13:15:56.097761    5361 system_pods.go:89] "coredns-66bc5c9577-hhndw" [65c642fe-4303-4dda-b00c-9892237a778c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 13:15:56.097786    5361 system_pods.go:89] "csi-hostpath-attacher-0" [71395414-4872-496e-9036-28fa99a9b658] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1124 13:15:56.097827    5361 system_pods.go:89] "csi-hostpath-resizer-0" [279dab72-076d-46b9-90f4-1d69768633da] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1124 13:15:56.097854    5361 system_pods.go:89] "csi-hostpathplugin-89nqp" [933c89cf-7b06-45b0-a9b4-be0b5f3b3bde] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1124 13:15:56.097878    5361 system_pods.go:89] "etcd-addons-647907" [876e6603-49e6-44f6-9331-ed2a53b6657b] Running
	I1124 13:15:56.097913    5361 system_pods.go:89] "kindnet-cq7x5" [d7365392-f359-4f14-93e6-ae671834849d] Running
	I1124 13:15:56.097936    5361 system_pods.go:89] "kube-apiserver-addons-647907" [af27e5b5-2033-4c23-95d2-39ec0194646d] Running
	I1124 13:15:56.097956    5361 system_pods.go:89] "kube-controller-manager-addons-647907" [132a9fce-2239-4973-a9e3-3af445872d21] Running
	I1124 13:15:56.097994    5361 system_pods.go:89] "kube-ingress-dns-minikube" [8bade7cb-4229-4fda-9216-6adefd2a920f] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1124 13:15:56.098016    5361 system_pods.go:89] "kube-proxy-n8mpw" [56957e05-4c86-4594-937e-b547b9ffdb86] Running
	I1124 13:15:56.098039    5361 system_pods.go:89] "kube-scheduler-addons-647907" [e8469bdd-2a1b-4d5a-932b-866e717756f2] Running
	I1124 13:15:56.098077    5361 system_pods.go:89] "metrics-server-85b7d694d7-xlsnf" [1ef721be-841d-4a36-949e-55c1b518e346] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1124 13:15:56.098102    5361 system_pods.go:89] "nvidia-device-plugin-daemonset-dn469" [983203db-3631-4a06-80b8-418beda496e4] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1124 13:15:56.098125    5361 system_pods.go:89] "registry-6b586f9694-2l7kt" [9c4c505b-2065-4059-a0a8-66ae39acca38] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1124 13:15:56.098163    5361 system_pods.go:89] "registry-creds-764b6fb674-pf26d" [eb7a3229-dc9f-4def-8eb7-2b7e2e8b7ad6] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1124 13:15:56.098190    5361 system_pods.go:89] "registry-proxy-9hgsb" [fe9ffd77-b342-49e0-b2bb-e1634ce62247] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1124 13:15:56.098216    5361 system_pods.go:89] "snapshot-controller-7d9fbc56b8-49v4w" [dc47a8db-7218-447d-b1a6-4534add69c8f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1124 13:15:56.098251    5361 system_pods.go:89] "snapshot-controller-7d9fbc56b8-qnq5w" [112f4619-ab05-454a-bb09-7bce99c81b00] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1124 13:15:56.098276    5361 system_pods.go:89] "storage-provisioner" [cc0f47a2-043c-4b50-bc22-c33673a0ea35] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 13:15:56.098303    5361 system_pods.go:126] duration metric: took 1.016840894s to wait for k8s-apps to be running ...
	I1124 13:15:56.098342    5361 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 13:15:56.098429    5361 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 13:15:56.188224    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:56.245554    5361 system_svc.go:56] duration metric: took 147.204534ms WaitForService to wait for kubelet
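The kubelet wait above reduces to systemctl's exit status: `is-active --quiet` exits 0 when the unit is active. The logged invocation runs through sudo and the harness's SSH runner; a plain local equivalent:

```go
// Sketch: check whether the kubelet service is active via systemctl's exit
// status (0 = active). Run locally here for illustration.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	if err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run(); err != nil {
		fmt.Println("kubelet not active:", err)
		return
	}
	fmt.Println("kubelet is active")
}
```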
	I1124 13:15:56.245630    5361 kubeadm.go:587] duration metric: took 42.192603294s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 13:15:56.245661    5361 node_conditions.go:102] verifying NodePressure condition ...
	I1124 13:15:56.248950    5361 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1124 13:15:56.249046    5361 node_conditions.go:123] node cpu capacity is 2
	I1124 13:15:56.249074    5361 node_conditions.go:105] duration metric: took 3.389732ms to run NodePressure ...
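The NodePressure step reads capacity figures off the node object (here 203034800Ki of ephemeral storage and 2 CPUs). A sketch of fetching those quantities with client-go; the node name is from the log, the kubeconfig path an assumption.

```go
// Sketch: read the capacity figures the NodePressure check logs
// (ephemeral-storage and cpu) from the node's status.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	n, err := cs.CoreV1().Nodes().Get(context.TODO(), "addons-647907", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
	cpu := n.Status.Capacity[corev1.ResourceCPU]
	fmt.Printf("ephemeral storage: %s, cpu: %s\n", storage.String(), cpu.String())
}
```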
	I1124 13:15:56.249119    5361 start.go:242] waiting for startup goroutines ...
	I1124 13:15:56.385391    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:56.406985    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:56.409080    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:56.634857    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:56.885225    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:56.908233    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:56.908957    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:57.136065    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:57.385744    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:57.409979    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:57.410120    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:57.635697    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:57.884896    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:57.909186    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:57.909381    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:58.135299    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:58.386248    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:58.408192    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:58.408987    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:58.634884    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:58.885886    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:58.909681    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:58.910231    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:59.137379    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:59.385182    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:59.409179    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:59.410085    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:59.639349    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:59.887544    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:59.920239    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:59.920492    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:00.141501    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:00.398635    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:00.423558    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:16:00.438617    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:00.638807    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:00.885261    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:00.913496    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:00.913754    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:16:01.137921    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:01.385792    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:01.408795    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:16:01.409195    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:01.652803    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:01.885805    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:01.908750    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:16:01.909385    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:02.135181    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:02.386152    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:02.408520    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:16:02.408805    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:02.635526    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:02.886876    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:02.909248    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:16:02.911486    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:03.135525    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:03.384984    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:03.408593    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:16:03.409073    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:03.635326    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:03.885402    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:03.908598    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:16:03.909465    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:04.136020    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:04.385786    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:04.409413    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:04.409535    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:16:04.635775    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:04.885394    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:04.908436    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:04.908656    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:16:05.136096    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:05.386229    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:05.409809    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:05.410492    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:16:05.635855    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:05.885046    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:05.915169    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:16:05.916124    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:06.134796    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:06.385751    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:06.407996    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:16:06.408182    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:06.635480    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:06.884919    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:06.907967    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:16:06.908180    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:07.135771    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:07.386227    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:07.408513    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:16:07.409192    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:07.635560    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:07.885462    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:07.907456    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:16:07.908017    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:08.135096    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:08.385013    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:08.408924    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:16:08.409023    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:08.634702    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:08.886529    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:08.908892    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:08.909014    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:16:09.135284    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:09.386124    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:09.407932    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:16:09.409321    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:09.635401    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:09.885195    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:09.907259    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:16:09.909722    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:10.136068    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:10.388438    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:10.487061    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:16:10.488177    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:10.634891    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:10.885707    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:10.986291    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:10.986452    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:16:11.135344    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:11.385800    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:11.408819    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:16:11.408961    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:11.635874    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:11.885851    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:11.908105    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:16:11.910148    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:12.135285    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:12.386052    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:12.486881    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:16:12.487247    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:12.635056    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:12.885350    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:12.913340    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:12.913426    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:16:13.135787    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:13.385561    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:13.408794    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:16:13.409696    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:13.635275    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:13.886103    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:13.912223    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:13.912591    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:16:14.135334    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:14.384773    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:14.418958    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:14.420895    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:16:14.635563    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:14.888138    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:14.920965    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:14.926014    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:16:15.135383    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:15.385482    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:15.408988    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:16:15.409608    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:15.638179    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:15.886080    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:15.910829    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:15.911956    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:16:16.134963    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:16.385653    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:16.416419    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:16:16.416810    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:16.635815    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:16.885728    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:16.908157    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:16:16.910448    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:17.136162    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:17.385909    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:17.409068    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:17.409447    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:16:17.635507    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:17.885707    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:17.916787    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:16:17.917074    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:18.135323    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:18.386282    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:18.408331    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:16:18.409868    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:18.635228    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:18.887281    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:18.914037    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:16:18.922433    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:19.135754    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:19.385527    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:19.409711    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:16:19.409784    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:19.635459    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:19.885334    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:19.910170    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:16:19.910575    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:20.136565    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:20.386255    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:20.409690    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:16:20.410564    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:20.640717    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:20.885377    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:20.908677    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:16:20.910333    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:21.135723    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:21.385845    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:21.409850    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:21.410224    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:16:21.635892    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:21.887867    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:21.910309    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:21.910634    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:16:22.137888    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:22.394894    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:22.409459    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:16:22.409757    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:22.635171    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:22.891036    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:22.909296    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:16:22.909674    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:23.134692    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:23.386508    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:23.487841    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:16:23.488021    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:23.635278    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:23.886938    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:23.910432    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:16:23.910618    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:24.134572    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:24.385593    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:24.409335    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:16:24.409717    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:24.636355    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:24.887320    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:24.910267    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:16:24.910359    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:25.135246    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:25.385601    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:25.409233    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:25.409576    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:16:25.635765    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:25.885053    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:25.911875    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:25.912795    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:16:26.135043    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:26.385596    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:26.407411    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:16:26.408006    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:26.635273    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:26.886392    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:26.910225    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:16:26.910603    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:27.135996    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:27.385860    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:27.408356    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:16:27.409276    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:27.635943    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:27.887806    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:27.911936    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:27.912436    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:16:28.136670    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:28.385850    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:28.408253    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:28.408341    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:16:28.636199    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:28.885932    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:28.908680    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:28.910178    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:16:29.135814    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:29.385106    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:29.409497    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:16:29.410369    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:29.635634    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:29.884947    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:29.914586    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:29.914694    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:16:30.139176    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:30.388263    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:30.408324    5361 kapi.go:107] duration metric: took 1m10.504422616s to wait for kubernetes.io/minikube-addons=registry ...
	I1124 13:16:30.408508    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:30.635478    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:30.888421    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:30.908410    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:31.135528    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:31.385739    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:31.408205    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:31.635316    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:31.885232    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:31.909169    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:32.135183    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:32.386139    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:32.408119    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:32.635869    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:32.885058    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:32.909910    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:33.135348    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:33.385579    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:33.409059    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:33.634765    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:33.885519    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:33.908910    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:34.135232    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:34.385882    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:34.413298    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:34.635720    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:34.884850    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:34.907579    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:35.135512    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:35.385073    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:35.408028    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:35.636059    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:35.885702    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:35.913016    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:36.135065    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:36.385553    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:36.409496    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:36.635876    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:36.887400    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:36.909294    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:37.135520    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:37.385175    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:37.408018    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:37.635310    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:37.891454    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:37.910788    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:38.135490    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:38.393444    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:38.420822    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:38.636390    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:38.885694    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:38.908533    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:39.134872    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:39.388151    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:39.408203    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:39.635795    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:39.885324    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:39.913152    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:40.135564    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:40.395841    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:40.417441    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:40.637930    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:40.889633    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:40.912396    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:41.135091    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:41.385404    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:41.408107    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:41.634710    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:41.885462    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:41.909586    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:42.136499    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:42.385447    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:42.408655    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:42.635963    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:42.890227    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:42.909105    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:43.135554    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:43.386119    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:43.408418    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:43.637189    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:43.886115    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:43.909560    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:44.135528    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:44.385245    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:44.409093    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:44.636155    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:44.886169    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:44.912840    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:45.135472    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:45.386114    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:45.408208    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:45.635533    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:45.884846    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:45.907896    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:46.136003    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:46.385817    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:46.409614    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:46.641770    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:46.887693    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:46.987923    5361 kapi.go:107] duration metric: took 1m27.083155964s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1124 13:16:47.134625    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:47.385138    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:47.635613    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:47.890609    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:48.208454    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:48.386570    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:48.636326    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:48.885906    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:49.135491    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:49.385464    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:49.636053    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:49.884785    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:50.139035    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:50.385697    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:50.634945    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:50.885891    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:51.136023    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:51.386156    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:51.635778    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:51.885657    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:52.134684    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:52.384886    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:52.635302    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:52.886270    5361 kapi.go:107] duration metric: took 1m32.504694513s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1124 13:16:53.139108    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:53.634909    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:54.138645    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:54.635400    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:55.135446    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:55.634975    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:56.135128    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:56.635306    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:57.135450    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:57.634775    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:58.135327    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:58.635176    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:59.134895    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:59.635465    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:17:00.135343    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:17:00.634961    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:17:01.136563    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:17:01.635852    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:17:02.138307    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:17:02.637593    5361 kapi.go:107] duration metric: took 1m39.005772808s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1124 13:17:02.638960    5361 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-647907 cluster.
	I1124 13:17:02.640342    5361 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1124 13:17:02.641588    5361 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
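For reference, a minimal sketch of opting a pod out of the credential mount described in the messages above. The pod name and image are illustrative placeholders; the only load-bearing part is the gcp-auth-skip-secret label, which must be present when the pod is created, since the webhook mutates pods at admission time:

	# Hypothetical pod that the gcp-auth webhook will skip (name/image are placeholders).
	# The <<- heredoc strips the leading tabs, leaving valid space-indented YAML.
	kubectl apply -f - <<-'EOF'
	apiVersion: v1
	kind: Pod
	metadata:
	  name: no-gcp-creds
	  labels:
	    gcp-auth-skip-secret: "true"
	spec:
	  containers:
	  - name: app
	    image: busybox
	    command: ["sleep", "3600"]
	EOF

	# And, as the message above says, to re-mount credentials into existing pods:
	out/minikube-linux-arm64 -p addons-647907 addons enable gcp-auth --refresh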
	I1124 13:17:02.642953    5361 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, ingress-dns, cloud-spanner, registry-creds, nvidia-device-plugin, amd-gpu-device-plugin, inspektor-gadget, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I1124 13:17:02.644419    5361 addons.go:530] duration metric: took 1m48.590954305s for enable addons: enabled=[default-storageclass storage-provisioner ingress-dns cloud-spanner registry-creds nvidia-device-plugin amd-gpu-device-plugin inspektor-gadget metrics-server yakd storage-provisioner-rancher volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I1124 13:17:02.644486    5361 start.go:247] waiting for cluster config update ...
	I1124 13:17:02.644515    5361 start.go:256] writing updated cluster config ...
	I1124 13:17:02.644827    5361 ssh_runner.go:195] Run: rm -f paused
	I1124 13:17:02.650565    5361 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 13:17:02.654483    5361 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-hhndw" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:17:02.660920    5361 pod_ready.go:94] pod "coredns-66bc5c9577-hhndw" is "Ready"
	I1124 13:17:02.660953    5361 pod_ready.go:86] duration metric: took 6.435155ms for pod "coredns-66bc5c9577-hhndw" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:17:02.663832    5361 pod_ready.go:83] waiting for pod "etcd-addons-647907" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:17:02.669824    5361 pod_ready.go:94] pod "etcd-addons-647907" is "Ready"
	I1124 13:17:02.669859    5361 pod_ready.go:86] duration metric: took 5.99516ms for pod "etcd-addons-647907" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:17:02.672774    5361 pod_ready.go:83] waiting for pod "kube-apiserver-addons-647907" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:17:02.678187    5361 pod_ready.go:94] pod "kube-apiserver-addons-647907" is "Ready"
	I1124 13:17:02.678262    5361 pod_ready.go:86] duration metric: took 5.458196ms for pod "kube-apiserver-addons-647907" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:17:02.681172    5361 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-647907" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:17:03.055754    5361 pod_ready.go:94] pod "kube-controller-manager-addons-647907" is "Ready"
	I1124 13:17:03.055779    5361 pod_ready.go:86] duration metric: took 374.579075ms for pod "kube-controller-manager-addons-647907" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:17:03.256655    5361 pod_ready.go:83] waiting for pod "kube-proxy-n8mpw" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:17:03.655143    5361 pod_ready.go:94] pod "kube-proxy-n8mpw" is "Ready"
	I1124 13:17:03.655217    5361 pod_ready.go:86] duration metric: took 398.510326ms for pod "kube-proxy-n8mpw" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:17:03.856295    5361 pod_ready.go:83] waiting for pod "kube-scheduler-addons-647907" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:17:04.255020    5361 pod_ready.go:94] pod "kube-scheduler-addons-647907" is "Ready"
	I1124 13:17:04.255048    5361 pod_ready.go:86] duration metric: took 398.714947ms for pod "kube-scheduler-addons-647907" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:17:04.255062    5361 pod_ready.go:40] duration metric: took 1.604466409s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
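The readiness poll above can be roughly reproduced with kubectl, using the same label selectors quoted in the log line (a sketch only: kubectl wait checks the Ready condition but does not handle the "or be gone" case the test also accepts):

	# Wait up to the same 4m0s budget for each control-plane component selector.
	for sel in k8s-app=kube-dns component=etcd component=kube-apiserver \
	           component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler; do
	  kubectl -n kube-system wait --for=condition=Ready pod -l "$sel" --timeout=240s
	done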
	I1124 13:17:04.728598    5361 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1124 13:17:04.731032    5361 out.go:179] * Done! kubectl is now configured to use "addons-647907" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 24 13:19:09 addons-647907 crio[833]: time="2025-11-24T13:19:09.147644105Z" level=info msg="Removed pod sandbox: d2252dcdee50b983ed762ef89c57cc9b1674767b0b73d9ccf0124c7aeafa2a1e" id=f2e3f00f-91b9-4b99-a8b0-62fa953c0727 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 24 13:20:02 addons-647907 crio[833]: time="2025-11-24T13:20:02.60363572Z" level=info msg="Running pod sandbox: default/hello-world-app-5d498dc89-kc68q/POD" id=33346659-6941-486d-bf6e-3b852cf44976 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 24 13:20:02 addons-647907 crio[833]: time="2025-11-24T13:20:02.603710838Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 13:20:02 addons-647907 crio[833]: time="2025-11-24T13:20:02.63598613Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-kc68q Namespace:default ID:6d541c889d99f9c72de4ed7c220c97109d1c87c1f5f431e7e5ead546597eff3e UID:601e5b7d-bdc2-45b9-83ee-a5127a3e086a NetNS:/var/run/netns/e62a948a-37e6-4b07-963b-914f0690392d Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40017a71d8}] Aliases:map[]}"
	Nov 24 13:20:02 addons-647907 crio[833]: time="2025-11-24T13:20:02.636174579Z" level=info msg="Adding pod default_hello-world-app-5d498dc89-kc68q to CNI network \"kindnet\" (type=ptp)"
	Nov 24 13:20:02 addons-647907 crio[833]: time="2025-11-24T13:20:02.64885579Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-kc68q Namespace:default ID:6d541c889d99f9c72de4ed7c220c97109d1c87c1f5f431e7e5ead546597eff3e UID:601e5b7d-bdc2-45b9-83ee-a5127a3e086a NetNS:/var/run/netns/e62a948a-37e6-4b07-963b-914f0690392d Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40017a71d8}] Aliases:map[]}"
	Nov 24 13:20:02 addons-647907 crio[833]: time="2025-11-24T13:20:02.649238841Z" level=info msg="Checking pod default_hello-world-app-5d498dc89-kc68q for CNI network kindnet (type=ptp)"
	Nov 24 13:20:02 addons-647907 crio[833]: time="2025-11-24T13:20:02.652685721Z" level=info msg="Ran pod sandbox 6d541c889d99f9c72de4ed7c220c97109d1c87c1f5f431e7e5ead546597eff3e with infra container: default/hello-world-app-5d498dc89-kc68q/POD" id=33346659-6941-486d-bf6e-3b852cf44976 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 24 13:20:02 addons-647907 crio[833]: time="2025-11-24T13:20:02.659139132Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=d920f291-0d5e-43cc-88a3-d25a7cb52cc1 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 13:20:02 addons-647907 crio[833]: time="2025-11-24T13:20:02.659638894Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=d920f291-0d5e-43cc-88a3-d25a7cb52cc1 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 13:20:02 addons-647907 crio[833]: time="2025-11-24T13:20:02.659866604Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:1.0 found" id=d920f291-0d5e-43cc-88a3-d25a7cb52cc1 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 13:20:02 addons-647907 crio[833]: time="2025-11-24T13:20:02.663732433Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=056f7546-6512-4563-8794-c1251e1e28b1 name=/runtime.v1.ImageService/PullImage
	Nov 24 13:20:02 addons-647907 crio[833]: time="2025-11-24T13:20:02.672450059Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Nov 24 13:20:03 addons-647907 crio[833]: time="2025-11-24T13:20:03.3347908Z" level=info msg="Pulled image: docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b" id=056f7546-6512-4563-8794-c1251e1e28b1 name=/runtime.v1.ImageService/PullImage
	Nov 24 13:20:03 addons-647907 crio[833]: time="2025-11-24T13:20:03.33705801Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=5ebf1b1d-49fc-473d-9c17-cd2fc057c50d name=/runtime.v1.ImageService/ImageStatus
	Nov 24 13:20:03 addons-647907 crio[833]: time="2025-11-24T13:20:03.340933923Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=1ddff275-72a4-4c41-a445-31e62f57c67e name=/runtime.v1.ImageService/ImageStatus
	Nov 24 13:20:03 addons-647907 crio[833]: time="2025-11-24T13:20:03.351479789Z" level=info msg="Creating container: default/hello-world-app-5d498dc89-kc68q/hello-world-app" id=7400130d-56c8-4875-bd6e-5c8b8ba58179 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 13:20:03 addons-647907 crio[833]: time="2025-11-24T13:20:03.351604878Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 13:20:03 addons-647907 crio[833]: time="2025-11-24T13:20:03.386129328Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 13:20:03 addons-647907 crio[833]: time="2025-11-24T13:20:03.386374868Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/cc2d5a78ac624ecde369624e68815494870ccce85ec62313b96cfedd6c12d877/merged/etc/passwd: no such file or directory"
	Nov 24 13:20:03 addons-647907 crio[833]: time="2025-11-24T13:20:03.386401937Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/cc2d5a78ac624ecde369624e68815494870ccce85ec62313b96cfedd6c12d877/merged/etc/group: no such file or directory"
	Nov 24 13:20:03 addons-647907 crio[833]: time="2025-11-24T13:20:03.38676871Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 13:20:03 addons-647907 crio[833]: time="2025-11-24T13:20:03.406541589Z" level=info msg="Created container da64528bc0b328db9d1ad83d260eddb553efd2338c1eb91e59addb78d2ff1063: default/hello-world-app-5d498dc89-kc68q/hello-world-app" id=7400130d-56c8-4875-bd6e-5c8b8ba58179 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 13:20:03 addons-647907 crio[833]: time="2025-11-24T13:20:03.411584199Z" level=info msg="Starting container: da64528bc0b328db9d1ad83d260eddb553efd2338c1eb91e59addb78d2ff1063" id=63c12f35-65de-4179-8fb0-ae0443dd2e61 name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 13:20:03 addons-647907 crio[833]: time="2025-11-24T13:20:03.414406384Z" level=info msg="Started container" PID=7023 containerID=da64528bc0b328db9d1ad83d260eddb553efd2338c1eb91e59addb78d2ff1063 description=default/hello-world-app-5d498dc89-kc68q/hello-world-app id=63c12f35-65de-4179-8fb0-ae0443dd2e61 name=/runtime.v1.RuntimeService/StartContainer sandboxID=6d541c889d99f9c72de4ed7c220c97109d1c87c1f5f431e7e5ead546597eff3e
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED                  STATE               NAME                                     ATTEMPT             POD ID              POD                                        NAMESPACE
	da64528bc0b32       docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b                                        Less than a second ago   Running             hello-world-app                          0                   6d541c889d99f       hello-world-app-5d498dc89-kc68q            default
	03c98258ed576       docker.io/library/nginx@sha256:7391b3732e7f7ccd23ff1d02fbeadcde496f374d7460ad8e79260f8f6d2c9f90                                              2 minutes ago            Running             nginx                                    0                   0d98c698022fa       nginx                                      default
	76fb47c5dcc70       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          2 minutes ago            Running             busybox                                  0                   cd7fa17dc6ef2       busybox                                    default
	503deb47d65b1       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 3 minutes ago            Running             gcp-auth                                 0                   4112ee0d386f5       gcp-auth-78565c9fb4-kn6st                  gcp-auth
	f5728fafdcfd6       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          3 minutes ago            Running             csi-snapshotter                          0                   7d905a08f881d       csi-hostpathplugin-89nqp                   kube-system
	8ae9f6e9c70db       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          3 minutes ago            Running             csi-provisioner                          0                   7d905a08f881d       csi-hostpathplugin-89nqp                   kube-system
	5b3fa06bb0192       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            3 minutes ago            Running             liveness-probe                           0                   7d905a08f881d       csi-hostpathplugin-89nqp                   kube-system
	973aff8a30a4f       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           3 minutes ago            Running             hostpath                                 0                   7d905a08f881d       csi-hostpathplugin-89nqp                   kube-system
	7f05ea466739f       registry.k8s.io/ingress-nginx/controller@sha256:655333e68deab34ee3701f400c4d5d9709000cdfdadb802e4bd7500b027e1259                             3 minutes ago            Running             controller                               0                   daf0032de8fde       ingress-nginx-controller-6c8bf45fb-m8nwb   ingress-nginx
	3a0f4619a26f4       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:c2c5268a38de5c792beb84122c5350c644fbb9b85e04342ef72fa9a6d052f0b0                            3 minutes ago            Running             gadget                                   0                   6f5ebc6fa0563       gadget-6cbf2                               gadget
	6b55d5b71eba6       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                3 minutes ago            Running             node-driver-registrar                    0                   7d905a08f881d       csi-hostpathplugin-89nqp                   kube-system
	7e0552d507b6a       nvcr.io/nvidia/k8s-device-plugin@sha256:80924fc52384565a7c59f1e2f12319fb8f2b02a1c974bb3d73a9853fe01af874                                     3 minutes ago            Running             nvidia-device-plugin-ctr                 0                   fdef0c06db5cb       nvidia-device-plugin-daemonset-dn469       kube-system
	5b5f53029c3ca       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e733096c3a5b75504c6380083abc960c9627efd23e099df780adfb4eec197583                   3 minutes ago            Exited              patch                                    0                   edfe382f748a1       ingress-nginx-admission-patch-ckrmb        ingress-nginx
	1868812a69065       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              3 minutes ago            Running             registry-proxy                           0                   3947e21735713       registry-proxy-9hgsb                       kube-system
	ee0e8bc0faa14       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                                              3 minutes ago            Running             yakd                                     0                   11bb54e340385       yakd-dashboard-5ff678cb9-rt8kg             yakd-dashboard
	e66116781aa16       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             3 minutes ago            Running             csi-attacher                             0                   f02b00b45971d       csi-hostpath-attacher-0                    kube-system
	2d0baf6e27693       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               3 minutes ago            Running             minikube-ingress-dns                     0                   55418a29a9ab7       kube-ingress-dns-minikube                  kube-system
	13013bc5bc293       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e733096c3a5b75504c6380083abc960c9627efd23e099df780adfb4eec197583                   3 minutes ago            Exited              create                                   0                   2485fe917df5d       ingress-nginx-admission-create-zgt7r       ingress-nginx
	0ba479f65d38f       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      3 minutes ago            Running             volume-snapshot-controller               0                   177375b09ca3b       snapshot-controller-7d9fbc56b8-49v4w       kube-system
	e67c8d2fe588f       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   3 minutes ago            Running             csi-external-health-monitor-controller   0                   7d905a08f881d       csi-hostpathplugin-89nqp                   kube-system
	eb8a3b03da33f       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              3 minutes ago            Running             csi-resizer                              0                   e2becbd251a6a       csi-hostpath-resizer-0                     kube-system
	7ee4d3e3512b4       docker.io/library/registry@sha256:8715992817b2254fe61e74ffc6a4096d57a0cde36c95ea075676c05f7a94a630                                           3 minutes ago            Running             registry                                 0                   180af60765641       registry-6b586f9694-2l7kt                  kube-system
	36b6ae96ef25a       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             3 minutes ago            Running             local-path-provisioner                   0                   7739a02243c3f       local-path-provisioner-648f6765c9-dn62k    local-path-storage
	93bbb33eaf24e       gcr.io/cloud-spanner-emulator/emulator@sha256:daeab9cb1978e02113045625e2633619f465f22aac7638101995f4cd03607170                               3 minutes ago            Running             cloud-spanner-emulator                   0                   4241c1d9a8ea8       cloud-spanner-emulator-5bdddb765-s88qb     default
	06594980d0770       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      4 minutes ago            Running             volume-snapshot-controller               0                   56d7024f34a8f       snapshot-controller-7d9fbc56b8-qnq5w       kube-system
	586f58fd71be7       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        4 minutes ago            Running             metrics-server                           0                   583f0fbfe330f       metrics-server-85b7d694d7-xlsnf            kube-system
	8a76716af61b8       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             4 minutes ago            Running             storage-provisioner                      0                   7a35f43799697       storage-provisioner                        kube-system
	612efd74b90ce       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                                             4 minutes ago            Running             coredns                                  0                   88fbe6dca4fce       coredns-66bc5c9577-hhndw                   kube-system
	c4588bdbb8946       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                                                             4 minutes ago            Running             kube-proxy                               0                   35fa956a8e883       kube-proxy-n8mpw                           kube-system
	646560c4253bb       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                                             4 minutes ago            Running             kindnet-cni                              0                   4af34a45b733f       kindnet-cq7x5                              kube-system
	47bfa25635dec       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                                                             5 minutes ago            Running             kube-controller-manager                  0                   5e816d39f4177       kube-controller-manager-addons-647907      kube-system
	9bbf65dfab06c       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                                                             5 minutes ago            Running             etcd                                     0                   683d1e2e165aa       etcd-addons-647907                         kube-system
	9eb49b73252f4       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                                                             5 minutes ago            Running             kube-scheduler                           0                   073bcb08ee3c0       kube-scheduler-addons-647907               kube-system
	448d21b7cb222       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                                                             5 minutes ago            Running             kube-apiserver                           0                   cb92b885a5606       kube-apiserver-addons-647907               kube-system
	
	
	==> coredns [612efd74b90cefa74785056cca24a59600ffe50c4cd3d5dc64320505db3e6d46] <==
	[INFO] 10.244.0.10:34902 - 40936 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002935015s
	[INFO] 10.244.0.10:34902 - 13888 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000138331s
	[INFO] 10.244.0.10:34902 - 47191 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000185831s
	[INFO] 10.244.0.10:52476 - 40277 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000187538s
	[INFO] 10.244.0.10:52476 - 40499 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000303166s
	[INFO] 10.244.0.10:59432 - 58246 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000112149s
	[INFO] 10.244.0.10:59432 - 58024 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000081921s
	[INFO] 10.244.0.10:53141 - 15818 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000080526s
	[INFO] 10.244.0.10:53141 - 15389 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000290226s
	[INFO] 10.244.0.10:47645 - 577 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001614537s
	[INFO] 10.244.0.10:47645 - 1014 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001727539s
	[INFO] 10.244.0.10:49302 - 64609 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000146644s
	[INFO] 10.244.0.10:49302 - 65021 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000090988s
	[INFO] 10.244.0.21:51671 - 8025 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000199994s
	[INFO] 10.244.0.21:41638 - 47622 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000255765s
	[INFO] 10.244.0.21:37056 - 37264 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000166524s
	[INFO] 10.244.0.21:60926 - 48178 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000215083s
	[INFO] 10.244.0.21:50195 - 15223 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000282029s
	[INFO] 10.244.0.21:38673 - 7834 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000235284s
	[INFO] 10.244.0.21:56838 - 60541 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.003339844s
	[INFO] 10.244.0.21:46889 - 6244 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.003542266s
	[INFO] 10.244.0.21:34726 - 38234 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.001448102s
	[INFO] 10.244.0.21:37632 - 7581 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002997022s
	[INFO] 10.244.0.23:48545 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000185019s
	[INFO] 10.244.0.23:54630 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000090798s
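The NXDOMAIN bursts above are expected: with the cluster's default resolv.conf (ndots:5), a short name is tried against every search domain before the final answer, so each lookup leaves a trail of NXDOMAINs ending in one NOERROR. This can be observed from the busybox test pod already running in the default namespace:

	# Show the search path that generates the NXDOMAIN attempts, then trigger one lookup.
	kubectl exec busybox -- cat /etc/resolv.conf
	kubectl exec busybox -- nslookup registry.kube-system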
	
	
	==> describe nodes <==
	Name:               addons-647907
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-647907
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b5d1c9f4e75f4e638a533695fd62619949cefcab
	                    minikube.k8s.io/name=addons-647907
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T13_15_09_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-647907
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-647907"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 13:15:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-647907
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 13:19:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 13:19:54 +0000   Mon, 24 Nov 2025 13:15:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 13:19:54 +0000   Mon, 24 Nov 2025 13:15:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 13:19:54 +0000   Mon, 24 Nov 2025 13:15:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Nov 2025 13:19:54 +0000   Mon, 24 Nov 2025 13:15:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-647907
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 7283ea1857f18f20a875c29069214c9d
	  System UUID:                75c33a32-5988-45d0-af8d-ed87a64979b7
	  Boot ID:                    1b5f797b-5607-4a65-8de2-379783b7e272
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (28 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m59s
	  default                     cloud-spanner-emulator-5bdddb765-s88qb      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m47s
	  default                     hello-world-app-5d498dc89-kc68q             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m23s
	  gadget                      gadget-6cbf2                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m45s
	  gcp-auth                    gcp-auth-78565c9fb4-kn6st                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m41s
	  ingress-nginx               ingress-nginx-controller-6c8bf45fb-m8nwb    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         4m45s
	  kube-system                 coredns-66bc5c9577-hhndw                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     4m50s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m44s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m44s
	  kube-system                 csi-hostpathplugin-89nqp                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m10s
	  kube-system                 etcd-addons-647907                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         4m57s
	  kube-system                 kindnet-cq7x5                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      4m51s
	  kube-system                 kube-apiserver-addons-647907                250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m57s
	  kube-system                 kube-controller-manager-addons-647907       200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m55s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m46s
	  kube-system                 kube-proxy-n8mpw                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m51s
	  kube-system                 kube-scheduler-addons-647907                100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m57s
	  kube-system                 metrics-server-85b7d694d7-xlsnf             100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         4m47s
	  kube-system                 nvidia-device-plugin-daemonset-dn469        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m10s
	  kube-system                 registry-6b586f9694-2l7kt                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m47s
	  kube-system                 registry-creds-764b6fb674-pf26d             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m48s
	  kube-system                 registry-proxy-9hgsb                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m10s
	  kube-system                 snapshot-controller-7d9fbc56b8-49v4w        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m44s
	  kube-system                 snapshot-controller-7d9fbc56b8-qnq5w        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m44s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m46s
	  local-path-storage          local-path-provisioner-648f6765c9-dn62k     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m46s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-rt8kg              0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     4m45s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 4m49s  kube-proxy       
	  Normal   Starting                 4m56s  kubelet          Starting kubelet.
	  Warning  CgroupV1                 4m56s  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  4m55s  kubelet          Node addons-647907 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    4m55s  kubelet          Node addons-647907 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     4m55s  kubelet          Node addons-647907 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           4m52s  node-controller  Node addons-647907 event: Registered Node addons-647907 in Controller
	  Normal   NodeReady                4m10s  kubelet          Node addons-647907 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov24 12:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.015884] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.504458] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.033874] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.788873] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.144374] kauditd_printk_skb: 36 callbacks suppressed
	[Nov24 13:13] kauditd_printk_skb: 5 callbacks suppressed
	[Nov24 13:15] overlayfs: idmapped layers are currently not supported
	[  +0.074288] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [9bbf65dfab06c0f9b0a6968500261eebc250dd8b47864e172e662f184a66f382] <==
	{"level":"warn","ts":"2025-11-24T13:15:04.405128Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37576","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:15:04.424063Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37598","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:15:04.446171Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37616","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:15:04.473700Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37644","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:15:04.499597Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37668","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:15:04.531456Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37680","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:15:04.558560Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37710","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:15:04.584860Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37726","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:15:04.617359Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37750","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:15:04.689799Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37774","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:15:04.724105Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37788","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:15:04.753274Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37814","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:15:04.778455Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37826","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:15:04.815676Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37846","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:15:04.867509Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37878","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:15:04.874366Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37898","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:15:04.909601Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37904","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:15:04.948414Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37932","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:15:05.083560Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37948","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:15:20.664303Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53408","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:15:20.682619Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53424","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:15:42.916932Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35870","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:15:42.931466Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35880","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:15:42.975420Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35900","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:15:42.985102Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35926","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [503deb47d65b159e876f06a34337930d9e17d7917403b7ae7aa1e094ee40e4e2] <==
	2025/11/24 13:17:02 GCP Auth Webhook started!
	2025/11/24 13:17:05 Ready to marshal response ...
	2025/11/24 13:17:05 Ready to write response ...
	2025/11/24 13:17:05 Ready to marshal response ...
	2025/11/24 13:17:05 Ready to write response ...
	2025/11/24 13:17:05 Ready to marshal response ...
	2025/11/24 13:17:05 Ready to write response ...
	2025/11/24 13:17:25 Ready to marshal response ...
	2025/11/24 13:17:25 Ready to write response ...
	2025/11/24 13:17:30 Ready to marshal response ...
	2025/11/24 13:17:30 Ready to write response ...
	2025/11/24 13:17:30 Ready to marshal response ...
	2025/11/24 13:17:30 Ready to write response ...
	2025/11/24 13:17:39 Ready to marshal response ...
	2025/11/24 13:17:39 Ready to write response ...
	2025/11/24 13:17:41 Ready to marshal response ...
	2025/11/24 13:17:41 Ready to write response ...
	2025/11/24 13:17:56 Ready to marshal response ...
	2025/11/24 13:17:56 Ready to write response ...
	2025/11/24 13:18:10 Ready to marshal response ...
	2025/11/24 13:18:10 Ready to write response ...
	2025/11/24 13:20:02 Ready to marshal response ...
	2025/11/24 13:20:02 Ready to write response ...
	
	
	==> kernel <==
	 13:20:04 up  1:02,  0 user,  load average: 0.66, 0.98, 0.53
	Linux addons-647907 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [646560c4253bbd4f4ca0e15d6bfd9bc9d55fa7f25775099b26b7343265d565aa] <==
	I1124 13:17:54.730103       1 main.go:301] handling current node
	I1124 13:18:04.730097       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 13:18:04.730158       1 main.go:301] handling current node
	I1124 13:18:14.730375       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 13:18:14.730497       1 main.go:301] handling current node
	I1124 13:18:24.729791       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 13:18:24.729822       1 main.go:301] handling current node
	I1124 13:18:34.738715       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 13:18:34.738823       1 main.go:301] handling current node
	I1124 13:18:44.737739       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 13:18:44.737772       1 main.go:301] handling current node
	I1124 13:18:54.729327       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 13:18:54.729375       1 main.go:301] handling current node
	I1124 13:19:04.730341       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 13:19:04.730373       1 main.go:301] handling current node
	I1124 13:19:14.729401       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 13:19:14.729436       1 main.go:301] handling current node
	I1124 13:19:24.735448       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 13:19:24.735483       1 main.go:301] handling current node
	I1124 13:19:34.730327       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 13:19:34.730365       1 main.go:301] handling current node
	I1124 13:19:44.730073       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 13:19:44.730223       1 main.go:301] handling current node
	I1124 13:19:54.729680       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 13:19:54.729774       1 main.go:301] handling current node
	
	
	==> kube-apiserver [448d21b7cb2225b5c6cb8829a9d9cc5985f1bee205abf34cb9730bc9b186bc0d] <==
	W1124 13:15:42.932063       1 logging.go:55] [core] [Channel #271 SubChannel #272]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1124 13:15:42.962269       1 logging.go:55] [core] [Channel #275 SubChannel #276]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1124 13:15:42.982702       1 logging.go:55] [core] [Channel #279 SubChannel #280]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1124 13:15:54.925170       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.100.215.71:443: connect: connection refused
	E1124 13:15:54.925215       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.100.215.71:443: connect: connection refused" logger="UnhandledError"
	W1124 13:15:54.925623       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.100.215.71:443: connect: connection refused
	E1124 13:15:54.925652       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.100.215.71:443: connect: connection refused" logger="UnhandledError"
	W1124 13:15:54.984073       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.100.215.71:443: connect: connection refused
	E1124 13:15:54.984837       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.100.215.71:443: connect: connection refused" logger="UnhandledError"
	E1124 13:16:01.567187       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.103.27.120:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.103.27.120:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.103.27.120:443: connect: connection refused" logger="UnhandledError"
	W1124 13:16:01.567542       1 handler_proxy.go:99] no RequestInfo found in the context
	E1124 13:16:01.567621       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1124 13:16:01.569327       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.103.27.120:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.103.27.120:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.103.27.120:443: connect: connection refused" logger="UnhandledError"
	E1124 13:16:01.573587       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.103.27.120:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.103.27.120:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.103.27.120:443: connect: connection refused" logger="UnhandledError"
	I1124 13:16:01.684529       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1124 13:17:14.062475       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:33654: use of closed network connection
	E1124 13:17:14.295103       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:33680: use of closed network connection
	E1124 13:17:14.421535       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:33694: use of closed network connection
	I1124 13:17:41.410238       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1124 13:17:41.697173       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.99.53.62"}
	I1124 13:18:07.342753       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1124 13:20:02.449579       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.101.176.99"}
	
	
	==> kube-controller-manager [47bfa25635dec103e10675178c02de7fcaf080dccd97feecc568c666df4bea66] <==
	I1124 13:15:12.947535       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1124 13:15:12.947575       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1124 13:15:12.947616       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1124 13:15:12.947644       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1124 13:15:12.947680       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1124 13:15:12.947707       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1124 13:15:12.947733       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1124 13:15:12.947774       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1124 13:15:12.947816       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1124 13:15:12.951857       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 13:15:12.951924       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1124 13:15:12.951958       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1124 13:15:12.951984       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1124 13:15:12.951989       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1124 13:15:12.951994       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1124 13:15:12.963510       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="addons-647907" podCIDRs=["10.244.0.0/24"]
	E1124 13:15:17.928034       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1124 13:15:42.909528       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1124 13:15:42.909680       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1124 13:15:42.909725       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1124 13:15:42.950425       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1124 13:15:42.955654       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1124 13:15:43.010060       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 13:15:43.055990       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 13:15:57.899017       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [c4588bdbb8946f09fafad25ba2e09f39ff759f2b8cad2f6dadade51f4c71ae52] <==
	I1124 13:15:14.921607       1 server_linux.go:53] "Using iptables proxy"
	I1124 13:15:15.037310       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1124 13:15:15.142568       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1124 13:15:15.142608       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1124 13:15:15.142670       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 13:15:15.218939       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 13:15:15.219025       1 server_linux.go:132] "Using iptables Proxier"
	I1124 13:15:15.232444       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 13:15:15.239329       1 server.go:527] "Version info" version="v1.34.1"
	I1124 13:15:15.239376       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 13:15:15.240853       1 config.go:200] "Starting service config controller"
	I1124 13:15:15.240871       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 13:15:15.240887       1 config.go:106] "Starting endpoint slice config controller"
	I1124 13:15:15.240891       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 13:15:15.240909       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 13:15:15.240913       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1124 13:15:15.241530       1 config.go:309] "Starting node config controller"
	I1124 13:15:15.241549       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 13:15:15.241561       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1124 13:15:15.341036       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1124 13:15:15.341071       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1124 13:15:15.341113       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [9eb49b73252f434f02f8d18b6013cd3bdd4b40c3b4046a35100d5350b9164167] <==
	E1124 13:15:06.116375       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1124 13:15:06.118961       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1124 13:15:06.119036       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1124 13:15:06.119106       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1124 13:15:06.119158       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1124 13:15:06.119194       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1124 13:15:06.119249       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1124 13:15:06.119302       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1124 13:15:06.119340       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1124 13:15:06.119471       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1124 13:15:06.119526       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1124 13:15:06.119564       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1124 13:15:06.955399       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1124 13:15:06.987036       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1124 13:15:06.988249       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1124 13:15:07.122684       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1124 13:15:07.173491       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1124 13:15:07.182166       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1124 13:15:07.194924       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1124 13:15:07.250838       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1124 13:15:07.284535       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1124 13:15:07.286928       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1124 13:15:07.323880       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1124 13:15:07.356123       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1124 13:15:10.392250       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 24 13:18:10 addons-647907 kubelet[1273]: I1124 13:18:10.845307    1273 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9df27cfa-608b-47c2-9b74-3c1771f7cee2" path="/var/lib/kubelet/pods/9df27cfa-608b-47c2-9b74-3c1771f7cee2/volumes"
	Nov 24 13:18:10 addons-647907 kubelet[1273]: W1124 13:18:10.877445    1273 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/72292e2fa4c837496fd187b0d5c5858af15fb8e4b6965f101b72fc196db21cc3/crio-d2252dcdee50b983ed762ef89c57cc9b1674767b0b73d9ccf0124c7aeafa2a1e WatchSource:0}: Error finding container d2252dcdee50b983ed762ef89c57cc9b1674767b0b73d9ccf0124c7aeafa2a1e: Status 404 returned error can't find the container with id d2252dcdee50b983ed762ef89c57cc9b1674767b0b73d9ccf0124c7aeafa2a1e
	Nov 24 13:18:11 addons-647907 kubelet[1273]: I1124 13:18:11.884741    1273 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/task-pv-pod-restore" podStartSLOduration=1.5371587469999999 podStartE2EDuration="1.884720132s" podCreationTimestamp="2025-11-24 13:18:10 +0000 UTC" firstStartedPulling="2025-11-24 13:18:10.879999754 +0000 UTC m=+182.205986265" lastFinishedPulling="2025-11-24 13:18:11.227561139 +0000 UTC m=+182.553547650" observedRunningTime="2025-11-24 13:18:11.883464007 +0000 UTC m=+183.209450534" watchObservedRunningTime="2025-11-24 13:18:11.884720132 +0000 UTC m=+183.210706643"
	Nov 24 13:18:17 addons-647907 kubelet[1273]: I1124 13:18:17.895110    1273 scope.go:117] "RemoveContainer" containerID="e019b4d22d7a5899e3b2382f13946a45f59566b364b495509520384a1fa91f8b"
	Nov 24 13:18:17 addons-647907 kubelet[1273]: I1124 13:18:17.906046    1273 scope.go:117] "RemoveContainer" containerID="e019b4d22d7a5899e3b2382f13946a45f59566b364b495509520384a1fa91f8b"
	Nov 24 13:18:17 addons-647907 kubelet[1273]: E1124 13:18:17.907091    1273 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e019b4d22d7a5899e3b2382f13946a45f59566b364b495509520384a1fa91f8b\": container with ID starting with e019b4d22d7a5899e3b2382f13946a45f59566b364b495509520384a1fa91f8b not found: ID does not exist" containerID="e019b4d22d7a5899e3b2382f13946a45f59566b364b495509520384a1fa91f8b"
	Nov 24 13:18:17 addons-647907 kubelet[1273]: I1124 13:18:17.907138    1273 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e019b4d22d7a5899e3b2382f13946a45f59566b364b495509520384a1fa91f8b"} err="failed to get container status \"e019b4d22d7a5899e3b2382f13946a45f59566b364b495509520384a1fa91f8b\": rpc error: code = NotFound desc = could not find container \"e019b4d22d7a5899e3b2382f13946a45f59566b364b495509520384a1fa91f8b\": container with ID starting with e019b4d22d7a5899e3b2382f13946a45f59566b364b495509520384a1fa91f8b not found: ID does not exist"
	Nov 24 13:18:17 addons-647907 kubelet[1273]: I1124 13:18:17.942173    1273 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/565f970f-28e0-49be-b166-1412a10b4ccf-gcp-creds\") pod \"565f970f-28e0-49be-b166-1412a10b4ccf\" (UID: \"565f970f-28e0-49be-b166-1412a10b4ccf\") "
	Nov 24 13:18:17 addons-647907 kubelet[1273]: I1124 13:18:17.942236    1273 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mw582\" (UniqueName: \"kubernetes.io/projected/565f970f-28e0-49be-b166-1412a10b4ccf-kube-api-access-mw582\") pod \"565f970f-28e0-49be-b166-1412a10b4ccf\" (UID: \"565f970f-28e0-49be-b166-1412a10b4ccf\") "
	Nov 24 13:18:17 addons-647907 kubelet[1273]: I1124 13:18:17.942360    1273 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/565f970f-28e0-49be-b166-1412a10b4ccf-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "565f970f-28e0-49be-b166-1412a10b4ccf" (UID: "565f970f-28e0-49be-b166-1412a10b4ccf"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Nov 24 13:18:17 addons-647907 kubelet[1273]: I1124 13:18:17.942380    1273 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"task-pv-storage\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^05c52620-c938-11f0-b75c-42254d90a55a\") pod \"565f970f-28e0-49be-b166-1412a10b4ccf\" (UID: \"565f970f-28e0-49be-b166-1412a10b4ccf\") "
	Nov 24 13:18:17 addons-647907 kubelet[1273]: I1124 13:18:17.942538    1273 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/565f970f-28e0-49be-b166-1412a10b4ccf-gcp-creds\") on node \"addons-647907\" DevicePath \"\""
	Nov 24 13:18:17 addons-647907 kubelet[1273]: I1124 13:18:17.947721    1273 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/565f970f-28e0-49be-b166-1412a10b4ccf-kube-api-access-mw582" (OuterVolumeSpecName: "kube-api-access-mw582") pod "565f970f-28e0-49be-b166-1412a10b4ccf" (UID: "565f970f-28e0-49be-b166-1412a10b4ccf"). InnerVolumeSpecName "kube-api-access-mw582". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Nov 24 13:18:17 addons-647907 kubelet[1273]: I1124 13:18:17.950560    1273 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/hostpath.csi.k8s.io^05c52620-c938-11f0-b75c-42254d90a55a" (OuterVolumeSpecName: "task-pv-storage") pod "565f970f-28e0-49be-b166-1412a10b4ccf" (UID: "565f970f-28e0-49be-b166-1412a10b4ccf"). InnerVolumeSpecName "pvc-b5172283-865f-4b31-adf1-8ce358edd21f". PluginName "kubernetes.io/csi", VolumeGIDValue ""
	Nov 24 13:18:18 addons-647907 kubelet[1273]: I1124 13:18:18.043515    1273 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mw582\" (UniqueName: \"kubernetes.io/projected/565f970f-28e0-49be-b166-1412a10b4ccf-kube-api-access-mw582\") on node \"addons-647907\" DevicePath \"\""
	Nov 24 13:18:18 addons-647907 kubelet[1273]: I1124 13:18:18.043767    1273 reconciler_common.go:292] "operationExecutor.UnmountDevice started for volume \"pvc-b5172283-865f-4b31-adf1-8ce358edd21f\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^05c52620-c938-11f0-b75c-42254d90a55a\") on node \"addons-647907\" "
	Nov 24 13:18:18 addons-647907 kubelet[1273]: I1124 13:18:18.049206    1273 operation_generator.go:895] UnmountDevice succeeded for volume "pvc-b5172283-865f-4b31-adf1-8ce358edd21f" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^05c52620-c938-11f0-b75c-42254d90a55a") on node "addons-647907"
	Nov 24 13:18:18 addons-647907 kubelet[1273]: I1124 13:18:18.144858    1273 reconciler_common.go:299] "Volume detached for volume \"pvc-b5172283-865f-4b31-adf1-8ce358edd21f\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^05c52620-c938-11f0-b75c-42254d90a55a\") on node \"addons-647907\" DevicePath \"\""
	Nov 24 13:18:18 addons-647907 kubelet[1273]: I1124 13:18:18.845270    1273 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="565f970f-28e0-49be-b166-1412a10b4ccf" path="/var/lib/kubelet/pods/565f970f-28e0-49be-b166-1412a10b4ccf/volumes"
	Nov 24 13:18:33 addons-647907 kubelet[1273]: I1124 13:18:33.842185    1273 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-6b586f9694-2l7kt" secret="" err="secret \"gcp-auth\" not found"
	Nov 24 13:19:09 addons-647907 kubelet[1273]: I1124 13:19:09.842343    1273 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-dn469" secret="" err="secret \"gcp-auth\" not found"
	Nov 24 13:19:28 addons-647907 kubelet[1273]: I1124 13:19:28.843205    1273 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-9hgsb" secret="" err="secret \"gcp-auth\" not found"
	Nov 24 13:19:50 addons-647907 kubelet[1273]: I1124 13:19:50.842473    1273 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-6b586f9694-2l7kt" secret="" err="secret \"gcp-auth\" not found"
	Nov 24 13:20:02 addons-647907 kubelet[1273]: I1124 13:20:02.350449    1273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2k677\" (UniqueName: \"kubernetes.io/projected/601e5b7d-bdc2-45b9-83ee-a5127a3e086a-kube-api-access-2k677\") pod \"hello-world-app-5d498dc89-kc68q\" (UID: \"601e5b7d-bdc2-45b9-83ee-a5127a3e086a\") " pod="default/hello-world-app-5d498dc89-kc68q"
	Nov 24 13:20:02 addons-647907 kubelet[1273]: I1124 13:20:02.350502    1273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/601e5b7d-bdc2-45b9-83ee-a5127a3e086a-gcp-creds\") pod \"hello-world-app-5d498dc89-kc68q\" (UID: \"601e5b7d-bdc2-45b9-83ee-a5127a3e086a\") " pod="default/hello-world-app-5d498dc89-kc68q"
	
	
	==> storage-provisioner [8a76716af61b8877cf1b7e3b54f8de99e52ffe5a1a222437f661359793b8bc5a] <==
	W1124 13:19:39.525999       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:19:41.528971       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:19:41.533621       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:19:43.537379       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:19:43.544246       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:19:45.547201       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:19:45.551412       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:19:47.555087       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:19:47.562273       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:19:49.565044       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:19:49.569460       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:19:51.572578       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:19:51.577105       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:19:53.580111       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:19:53.584741       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:19:55.588385       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:19:55.595105       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:19:57.598267       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:19:57.602720       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:19:59.606094       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:19:59.610878       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:20:01.614055       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:20:01.620081       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:20:03.626585       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:20:03.633266       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-647907 -n addons-647907
helpers_test.go:269: (dbg) Run:  kubectl --context addons-647907 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-zgt7r ingress-nginx-admission-patch-ckrmb registry-creds-764b6fb674-pf26d
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-647907 describe pod ingress-nginx-admission-create-zgt7r ingress-nginx-admission-patch-ckrmb registry-creds-764b6fb674-pf26d
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-647907 describe pod ingress-nginx-admission-create-zgt7r ingress-nginx-admission-patch-ckrmb registry-creds-764b6fb674-pf26d: exit status 1 (139.870977ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-zgt7r" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-ckrmb" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-pf26d" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-647907 describe pod ingress-nginx-admission-create-zgt7r ingress-nginx-admission-patch-ckrmb registry-creds-764b6fb674-pf26d: exit status 1
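
The post-mortem lists non-running pods across all namespaces, but then describes them without a namespace, so the lookups appear to land in the default namespace and return NotFound for pods that actually live in ingress-nginx and kube-system (see the node's pod table above). Below is a minimal, namespace-aware sketch of that loop — illustrative only, not the harness's actual helper; it assumes just that kubectl is on PATH and uses the context name from this report:

	// Hypothetical post-mortem helper (not the harness's own code): list
	// non-Running pods with their namespaces, then describe each one in
	// its namespace so pods from ingress-nginx or kube-system resolve.
	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	func main() {
		ctx := "addons-647907" // context name taken from the report above
	
		// Same phase filter the harness uses, emitting namespace/name pairs.
		out, err := exec.Command("kubectl", "--context", ctx, "get", "po", "-A",
			"--field-selector=status.phase!=Running",
			`-o=jsonpath={range .items[*]}{.metadata.namespace}/{.metadata.name}{"\n"}{end}`,
		).Output()
		if err != nil {
			fmt.Println("listing non-running pods failed:", err)
			return
		}
	
		for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
			ns, name, ok := strings.Cut(line, "/")
			if !ok {
				continue
			}
			desc, err := exec.Command("kubectl", "--context", ctx,
				"-n", ns, "describe", "pod", name).CombinedOutput()
			if err != nil {
				// NotFound here just means the pod was cleaned up in between.
				fmt.Printf("describe %s/%s: %v\n%s", ns, name, err, desc)
				continue
			}
			fmt.Printf("%s\n", desc)
		}
	}

Describing each pod inside its own namespace also keeps one pod that was garbage-collected mid-run from masking the output for the rest.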
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-647907 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-647907 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (356.357808ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1124 13:20:06.050492   14903 out.go:360] Setting OutFile to fd 1 ...
	I1124 13:20:06.050790   14903 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:20:06.050830   14903 out.go:374] Setting ErrFile to fd 2...
	I1124 13:20:06.050854   14903 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:20:06.051637   14903 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-2805/.minikube/bin
	I1124 13:20:06.052103   14903 mustload.go:66] Loading cluster: addons-647907
	I1124 13:20:06.052801   14903 config.go:182] Loaded profile config "addons-647907": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 13:20:06.052825   14903 addons.go:622] checking whether the cluster is paused
	I1124 13:20:06.052969   14903 config.go:182] Loaded profile config "addons-647907": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 13:20:06.052988   14903 host.go:66] Checking if "addons-647907" exists ...
	I1124 13:20:06.053709   14903 cli_runner.go:164] Run: docker container inspect addons-647907 --format={{.State.Status}}
	I1124 13:20:06.083471   14903 ssh_runner.go:195] Run: systemctl --version
	I1124 13:20:06.083539   14903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-647907
	I1124 13:20:06.110604   14903 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/addons-647907/id_rsa Username:docker}
	I1124 13:20:06.222619   14903 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 13:20:06.222748   14903 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 13:20:06.269019   14903 cri.go:89] found id: "f5728fafdcfd6c412d1ef4060b50df18e35cf7f5dc2269c8aa70dd91319f0405"
	I1124 13:20:06.269082   14903 cri.go:89] found id: "8ae9f6e9c70db26bf1fb36c7b9a0254fe500c317f2a6945d2f07ea86680fb3c8"
	I1124 13:20:06.269103   14903 cri.go:89] found id: "5b3fa06bb0192f03af3aabc7e7c455544064d84030d3f3d8fa1eb708a7e3beb5"
	I1124 13:20:06.269128   14903 cri.go:89] found id: "973aff8a30a4f5e51842fa26a1fbb6214e33d96b452253dcff6908718c2afe7c"
	I1124 13:20:06.269148   14903 cri.go:89] found id: "6b55d5b71eba63c5fc2e54b61ab020786ab11eaa1c8a4cf025c69555ae454c9d"
	I1124 13:20:06.269169   14903 cri.go:89] found id: "7e0552d507b6a5d2225b7ea9bed55bfaec54aa5b07b536fb5623d66660e99d5c"
	I1124 13:20:06.269190   14903 cri.go:89] found id: "1868812a690656d2ef78f1d40726b50c55d3757008e08cd19a56abefc60b8f0b"
	I1124 13:20:06.269210   14903 cri.go:89] found id: "e66116781aa16aa7e3505b699baab2f91eb0fe84f2d48e96da8ffaf7c370d972"
	I1124 13:20:06.269232   14903 cri.go:89] found id: "2d0baf6e276932cf15682e3bdf601a7b4d5900d45814832a33695f1bc733e4f2"
	I1124 13:20:06.269257   14903 cri.go:89] found id: "0ba479f65d38f41fd3e6c32354ac0c909a63bf28c2527a3b30d15cae5d824845"
	I1124 13:20:06.269279   14903 cri.go:89] found id: "e67c8d2fe588f3f54af6367c17ccdcd8e19cb3743d88b7ec8c67fe42ca1a460f"
	I1124 13:20:06.269307   14903 cri.go:89] found id: "eb8a3b03da33f63ba2961d22861575e453cf82588de943e9e2fb7ddc1c122f8e"
	I1124 13:20:06.269328   14903 cri.go:89] found id: "7ee4d3e3512b4459fe8cdc4d2749e1dd56d9f5a3d0dc86f8cdd7fbb19e41a97f"
	I1124 13:20:06.269350   14903 cri.go:89] found id: "06594980d0770dcb6f5dcacad55f96761b6d2067a3e3ce0eb929042eff4265f7"
	I1124 13:20:06.269371   14903 cri.go:89] found id: "586f58fd71be7749f41db1a712c05bfa756f658f9c19cfb9aa671c9a8b754c34"
	I1124 13:20:06.269405   14903 cri.go:89] found id: "8a76716af61b8877cf1b7e3b54f8de99e52ffe5a1a222437f661359793b8bc5a"
	I1124 13:20:06.269434   14903 cri.go:89] found id: "612efd74b90cefa74785056cca24a59600ffe50c4cd3d5dc64320505db3e6d46"
	I1124 13:20:06.269460   14903 cri.go:89] found id: "c4588bdbb8946f09fafad25ba2e09f39ff759f2b8cad2f6dadade51f4c71ae52"
	I1124 13:20:06.269485   14903 cri.go:89] found id: "646560c4253bbd4f4ca0e15d6bfd9bc9d55fa7f25775099b26b7343265d565aa"
	I1124 13:20:06.269504   14903 cri.go:89] found id: "47bfa25635dec103e10675178c02de7fcaf080dccd97feecc568c666df4bea66"
	I1124 13:20:06.269527   14903 cri.go:89] found id: "9bbf65dfab06c0f9b0a6968500261eebc250dd8b47864e172e662f184a66f382"
	I1124 13:20:06.269548   14903 cri.go:89] found id: "9eb49b73252f434f02f8d18b6013cd3bdd4b40c3b4046a35100d5350b9164167"
	I1124 13:20:06.269583   14903 cri.go:89] found id: "448d21b7cb2225b5c6cb8829a9d9cc5985f1bee205abf34cb9730bc9b186bc0d"
	I1124 13:20:06.269602   14903 cri.go:89] found id: ""
	I1124 13:20:06.269671   14903 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 13:20:06.296617   14903 out.go:203] 
	W1124 13:20:06.300302   14903 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T13:20:06Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T13:20:06Z" level=error msg="open /run/runc: no such file or directory"
	
	W1124 13:20:06.300341   14903 out.go:285] * 
	* 
	W1124 13:20:06.304792   14903 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1124 13:20:06.308119   14903 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable ingress-dns addon: args "out/minikube-linux-arm64 -p addons-647907 addons disable ingress-dns --alsologtostderr -v=1": exit status 11
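
The exit-status-11 failures in this report all show the same root cause in their traces: crictl successfully lists the kube-system containers, but the subsequent paused check shells out to `sudo runc list -f json`, which dies with "open /run/runc: no such file or directory". Below is a minimal sketch of that two-step probe, meant to be run inside the node (e.g. via `minikube ssh`); it assumes crictl and runc are on PATH and that crio keeps runc state under /run/runc:

	// Minimal sketch of the paused-state probe that fails above.
	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	func main() {
		// Step 1: list kube-system container IDs via the CRI. This is the
		// `crictl ps -a --quiet --label ...` call that succeeds in the log.
		ids, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			fmt.Println("crictl ps failed:", err)
			return
		}
		fmt.Printf("CRI reports %d bytes of container IDs\n", len(ids))
	
		// Step 2: ask runc for container states. This is the call that
		// fails here with "open /run/runc: no such file or directory".
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
		if err != nil {
			fmt.Printf("runc list failed: %v\n%s", err, out)
			return
		}
		fmt.Printf("runc state: %s\n", out)
	}

One plausible explanation (an assumption, not confirmed by this log) is that crio on this node is configured with a runtime other than runc, such as crun, so the /run/runc state directory is never created; a probe that read container state from crictl instead of invoking runc directly would not trip over that.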
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-647907 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-647907 addons disable ingress --alsologtostderr -v=1: exit status 11 (321.242001ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1124 13:20:06.384562   14958 out.go:360] Setting OutFile to fd 1 ...
	I1124 13:20:06.384784   14958 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:20:06.384808   14958 out.go:374] Setting ErrFile to fd 2...
	I1124 13:20:06.384827   14958 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:20:06.385101   14958 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-2805/.minikube/bin
	I1124 13:20:06.385405   14958 mustload.go:66] Loading cluster: addons-647907
	I1124 13:20:06.385800   14958 config.go:182] Loaded profile config "addons-647907": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 13:20:06.385837   14958 addons.go:622] checking whether the cluster is paused
	I1124 13:20:06.385972   14958 config.go:182] Loaded profile config "addons-647907": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 13:20:06.386000   14958 host.go:66] Checking if "addons-647907" exists ...
	I1124 13:20:06.386541   14958 cli_runner.go:164] Run: docker container inspect addons-647907 --format={{.State.Status}}
	I1124 13:20:06.404550   14958 ssh_runner.go:195] Run: systemctl --version
	I1124 13:20:06.404603   14958 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-647907
	I1124 13:20:06.434606   14958 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/addons-647907/id_rsa Username:docker}
	I1124 13:20:06.551083   14958 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 13:20:06.551180   14958 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 13:20:06.597108   14958 cri.go:89] found id: "f5728fafdcfd6c412d1ef4060b50df18e35cf7f5dc2269c8aa70dd91319f0405"
	I1124 13:20:06.597127   14958 cri.go:89] found id: "8ae9f6e9c70db26bf1fb36c7b9a0254fe500c317f2a6945d2f07ea86680fb3c8"
	I1124 13:20:06.597132   14958 cri.go:89] found id: "5b3fa06bb0192f03af3aabc7e7c455544064d84030d3f3d8fa1eb708a7e3beb5"
	I1124 13:20:06.597137   14958 cri.go:89] found id: "973aff8a30a4f5e51842fa26a1fbb6214e33d96b452253dcff6908718c2afe7c"
	I1124 13:20:06.597141   14958 cri.go:89] found id: "6b55d5b71eba63c5fc2e54b61ab020786ab11eaa1c8a4cf025c69555ae454c9d"
	I1124 13:20:06.597145   14958 cri.go:89] found id: "7e0552d507b6a5d2225b7ea9bed55bfaec54aa5b07b536fb5623d66660e99d5c"
	I1124 13:20:06.597149   14958 cri.go:89] found id: "1868812a690656d2ef78f1d40726b50c55d3757008e08cd19a56abefc60b8f0b"
	I1124 13:20:06.597152   14958 cri.go:89] found id: "e66116781aa16aa7e3505b699baab2f91eb0fe84f2d48e96da8ffaf7c370d972"
	I1124 13:20:06.597155   14958 cri.go:89] found id: "2d0baf6e276932cf15682e3bdf601a7b4d5900d45814832a33695f1bc733e4f2"
	I1124 13:20:06.597162   14958 cri.go:89] found id: "0ba479f65d38f41fd3e6c32354ac0c909a63bf28c2527a3b30d15cae5d824845"
	I1124 13:20:06.597165   14958 cri.go:89] found id: "e67c8d2fe588f3f54af6367c17ccdcd8e19cb3743d88b7ec8c67fe42ca1a460f"
	I1124 13:20:06.597168   14958 cri.go:89] found id: "eb8a3b03da33f63ba2961d22861575e453cf82588de943e9e2fb7ddc1c122f8e"
	I1124 13:20:06.597172   14958 cri.go:89] found id: "7ee4d3e3512b4459fe8cdc4d2749e1dd56d9f5a3d0dc86f8cdd7fbb19e41a97f"
	I1124 13:20:06.597175   14958 cri.go:89] found id: "06594980d0770dcb6f5dcacad55f96761b6d2067a3e3ce0eb929042eff4265f7"
	I1124 13:20:06.597178   14958 cri.go:89] found id: "586f58fd71be7749f41db1a712c05bfa756f658f9c19cfb9aa671c9a8b754c34"
	I1124 13:20:06.597187   14958 cri.go:89] found id: "8a76716af61b8877cf1b7e3b54f8de99e52ffe5a1a222437f661359793b8bc5a"
	I1124 13:20:06.597190   14958 cri.go:89] found id: "612efd74b90cefa74785056cca24a59600ffe50c4cd3d5dc64320505db3e6d46"
	I1124 13:20:06.597194   14958 cri.go:89] found id: "c4588bdbb8946f09fafad25ba2e09f39ff759f2b8cad2f6dadade51f4c71ae52"
	I1124 13:20:06.597198   14958 cri.go:89] found id: "646560c4253bbd4f4ca0e15d6bfd9bc9d55fa7f25775099b26b7343265d565aa"
	I1124 13:20:06.597201   14958 cri.go:89] found id: "47bfa25635dec103e10675178c02de7fcaf080dccd97feecc568c666df4bea66"
	I1124 13:20:06.597205   14958 cri.go:89] found id: "9bbf65dfab06c0f9b0a6968500261eebc250dd8b47864e172e662f184a66f382"
	I1124 13:20:06.597208   14958 cri.go:89] found id: "9eb49b73252f434f02f8d18b6013cd3bdd4b40c3b4046a35100d5350b9164167"
	I1124 13:20:06.597211   14958 cri.go:89] found id: "448d21b7cb2225b5c6cb8829a9d9cc5985f1bee205abf34cb9730bc9b186bc0d"
	I1124 13:20:06.597214   14958 cri.go:89] found id: ""
	I1124 13:20:06.597264   14958 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 13:20:06.618717   14958 out.go:203] 
	W1124 13:20:06.621855   14958 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T13:20:06Z" level=error msg="open /run/runc: no such file or directory"
	
	W1124 13:20:06.621961   14958 out.go:285] * 
	W1124 13:20:06.627086   14958 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1124 13:20:06.630297   14958 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable ingress addon: args "out/minikube-linux-arm64 -p addons-647907 addons disable ingress --alsologtostderr -v=1": exit status 11
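All of the addon enable/disable failures in this report share one mechanism, visible in the stderr above: before touching an addon, minikube verifies the cluster is not paused by SSH-ing into the node and running `sudo runc list -f json`, and on this crio node the probe aborts with `open /run/runc: no such file or directory`, so the command exits 11 with MK_ADDON_DISABLE_PAUSED (or MK_ADDON_ENABLE_PAUSED) before the addon is touched. A minimal sketch of reproducing the probe by hand; the profile name is taken from the log, while the alternative --root path is only an assumption about where crio keeps its runc state:

	# the paused-state probe exactly as logged (fails while /run/runc is absent)
	minikube -p addons-647907 ssh -- sudo runc list -f json
	# hypothetical variant: point runc at the state root crio actually uses
	# (the path below is an assumption, not taken from this report)
	minikube -p addons-647907 ssh -- sudo runc --root /run/runc-crio list -f json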
--- FAIL: TestAddons/parallel/Ingress (145.52s)

TestAddons/parallel/InspektorGadget (6.27s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-6cbf2" [a351c460-e3b3-402d-a351-0ba7fff6effc] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003055282s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-647907 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-647907 addons disable inspektor-gadget --alsologtostderr -v=1: exit status 11 (263.592915ms)

-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1124 13:18:25.277745   13775 out.go:360] Setting OutFile to fd 1 ...
	I1124 13:18:25.277943   13775 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:18:25.277952   13775 out.go:374] Setting ErrFile to fd 2...
	I1124 13:18:25.277958   13775 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:18:25.278267   13775 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-2805/.minikube/bin
	I1124 13:18:25.278599   13775 mustload.go:66] Loading cluster: addons-647907
	I1124 13:18:25.279062   13775 config.go:182] Loaded profile config "addons-647907": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 13:18:25.279083   13775 addons.go:622] checking whether the cluster is paused
	I1124 13:18:25.279298   13775 config.go:182] Loaded profile config "addons-647907": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 13:18:25.279312   13775 host.go:66] Checking if "addons-647907" exists ...
	I1124 13:18:25.279926   13775 cli_runner.go:164] Run: docker container inspect addons-647907 --format={{.State.Status}}
	I1124 13:18:25.299648   13775 ssh_runner.go:195] Run: systemctl --version
	I1124 13:18:25.299705   13775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-647907
	I1124 13:18:25.316394   13775 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/addons-647907/id_rsa Username:docker}
	I1124 13:18:25.421905   13775 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 13:18:25.422016   13775 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 13:18:25.454458   13775 cri.go:89] found id: "f5728fafdcfd6c412d1ef4060b50df18e35cf7f5dc2269c8aa70dd91319f0405"
	I1124 13:18:25.454482   13775 cri.go:89] found id: "8ae9f6e9c70db26bf1fb36c7b9a0254fe500c317f2a6945d2f07ea86680fb3c8"
	I1124 13:18:25.454492   13775 cri.go:89] found id: "5b3fa06bb0192f03af3aabc7e7c455544064d84030d3f3d8fa1eb708a7e3beb5"
	I1124 13:18:25.454497   13775 cri.go:89] found id: "973aff8a30a4f5e51842fa26a1fbb6214e33d96b452253dcff6908718c2afe7c"
	I1124 13:18:25.454501   13775 cri.go:89] found id: "6b55d5b71eba63c5fc2e54b61ab020786ab11eaa1c8a4cf025c69555ae454c9d"
	I1124 13:18:25.454508   13775 cri.go:89] found id: "7e0552d507b6a5d2225b7ea9bed55bfaec54aa5b07b536fb5623d66660e99d5c"
	I1124 13:18:25.454511   13775 cri.go:89] found id: "1868812a690656d2ef78f1d40726b50c55d3757008e08cd19a56abefc60b8f0b"
	I1124 13:18:25.454515   13775 cri.go:89] found id: "e66116781aa16aa7e3505b699baab2f91eb0fe84f2d48e96da8ffaf7c370d972"
	I1124 13:18:25.454518   13775 cri.go:89] found id: "2d0baf6e276932cf15682e3bdf601a7b4d5900d45814832a33695f1bc733e4f2"
	I1124 13:18:25.454524   13775 cri.go:89] found id: "0ba479f65d38f41fd3e6c32354ac0c909a63bf28c2527a3b30d15cae5d824845"
	I1124 13:18:25.454527   13775 cri.go:89] found id: "e67c8d2fe588f3f54af6367c17ccdcd8e19cb3743d88b7ec8c67fe42ca1a460f"
	I1124 13:18:25.454530   13775 cri.go:89] found id: "eb8a3b03da33f63ba2961d22861575e453cf82588de943e9e2fb7ddc1c122f8e"
	I1124 13:18:25.454534   13775 cri.go:89] found id: "7ee4d3e3512b4459fe8cdc4d2749e1dd56d9f5a3d0dc86f8cdd7fbb19e41a97f"
	I1124 13:18:25.454538   13775 cri.go:89] found id: "06594980d0770dcb6f5dcacad55f96761b6d2067a3e3ce0eb929042eff4265f7"
	I1124 13:18:25.454546   13775 cri.go:89] found id: "586f58fd71be7749f41db1a712c05bfa756f658f9c19cfb9aa671c9a8b754c34"
	I1124 13:18:25.454551   13775 cri.go:89] found id: "8a76716af61b8877cf1b7e3b54f8de99e52ffe5a1a222437f661359793b8bc5a"
	I1124 13:18:25.454555   13775 cri.go:89] found id: "612efd74b90cefa74785056cca24a59600ffe50c4cd3d5dc64320505db3e6d46"
	I1124 13:18:25.454559   13775 cri.go:89] found id: "c4588bdbb8946f09fafad25ba2e09f39ff759f2b8cad2f6dadade51f4c71ae52"
	I1124 13:18:25.454562   13775 cri.go:89] found id: "646560c4253bbd4f4ca0e15d6bfd9bc9d55fa7f25775099b26b7343265d565aa"
	I1124 13:18:25.454565   13775 cri.go:89] found id: "47bfa25635dec103e10675178c02de7fcaf080dccd97feecc568c666df4bea66"
	I1124 13:18:25.454570   13775 cri.go:89] found id: "9bbf65dfab06c0f9b0a6968500261eebc250dd8b47864e172e662f184a66f382"
	I1124 13:18:25.454576   13775 cri.go:89] found id: "9eb49b73252f434f02f8d18b6013cd3bdd4b40c3b4046a35100d5350b9164167"
	I1124 13:18:25.454580   13775 cri.go:89] found id: "448d21b7cb2225b5c6cb8829a9d9cc5985f1bee205abf34cb9730bc9b186bc0d"
	I1124 13:18:25.454583   13775 cri.go:89] found id: ""
	I1124 13:18:25.454632   13775 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 13:18:25.469015   13775 out.go:203] 
	W1124 13:18:25.471905   13775 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T13:18:25Z" level=error msg="open /run/runc: no such file or directory"
	
	W1124 13:18:25.471936   13775 out.go:285] * 
	W1124 13:18:25.476262   13775 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1124 13:18:25.479197   13775 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable inspektor-gadget addon: args "out/minikube-linux-arm64 -p addons-647907 addons disable inspektor-gadget --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/InspektorGadget (6.27s)

TestAddons/parallel/MetricsServer (5.37s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 4.466129ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-xlsnf" [1ef721be-841d-4a36-949e-55c1b518e346] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004771157s
addons_test.go:463: (dbg) Run:  kubectl --context addons-647907 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-647907 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-647907 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (268.824242ms)

-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1124 13:17:40.893099   12685 out.go:360] Setting OutFile to fd 1 ...
	I1124 13:17:40.893310   12685 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:17:40.893323   12685 out.go:374] Setting ErrFile to fd 2...
	I1124 13:17:40.893329   12685 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:17:40.893590   12685 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-2805/.minikube/bin
	I1124 13:17:40.893876   12685 mustload.go:66] Loading cluster: addons-647907
	I1124 13:17:40.894245   12685 config.go:182] Loaded profile config "addons-647907": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 13:17:40.894263   12685 addons.go:622] checking whether the cluster is paused
	I1124 13:17:40.894371   12685 config.go:182] Loaded profile config "addons-647907": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 13:17:40.894385   12685 host.go:66] Checking if "addons-647907" exists ...
	I1124 13:17:40.894877   12685 cli_runner.go:164] Run: docker container inspect addons-647907 --format={{.State.Status}}
	I1124 13:17:40.916072   12685 ssh_runner.go:195] Run: systemctl --version
	I1124 13:17:40.916137   12685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-647907
	I1124 13:17:40.934956   12685 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/addons-647907/id_rsa Username:docker}
	I1124 13:17:41.043643   12685 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 13:17:41.043730   12685 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 13:17:41.080664   12685 cri.go:89] found id: "f5728fafdcfd6c412d1ef4060b50df18e35cf7f5dc2269c8aa70dd91319f0405"
	I1124 13:17:41.080712   12685 cri.go:89] found id: "8ae9f6e9c70db26bf1fb36c7b9a0254fe500c317f2a6945d2f07ea86680fb3c8"
	I1124 13:17:41.080719   12685 cri.go:89] found id: "5b3fa06bb0192f03af3aabc7e7c455544064d84030d3f3d8fa1eb708a7e3beb5"
	I1124 13:17:41.080723   12685 cri.go:89] found id: "973aff8a30a4f5e51842fa26a1fbb6214e33d96b452253dcff6908718c2afe7c"
	I1124 13:17:41.080727   12685 cri.go:89] found id: "6b55d5b71eba63c5fc2e54b61ab020786ab11eaa1c8a4cf025c69555ae454c9d"
	I1124 13:17:41.080731   12685 cri.go:89] found id: "7e0552d507b6a5d2225b7ea9bed55bfaec54aa5b07b536fb5623d66660e99d5c"
	I1124 13:17:41.080735   12685 cri.go:89] found id: "1868812a690656d2ef78f1d40726b50c55d3757008e08cd19a56abefc60b8f0b"
	I1124 13:17:41.080739   12685 cri.go:89] found id: "e66116781aa16aa7e3505b699baab2f91eb0fe84f2d48e96da8ffaf7c370d972"
	I1124 13:17:41.080742   12685 cri.go:89] found id: "2d0baf6e276932cf15682e3bdf601a7b4d5900d45814832a33695f1bc733e4f2"
	I1124 13:17:41.080754   12685 cri.go:89] found id: "0ba479f65d38f41fd3e6c32354ac0c909a63bf28c2527a3b30d15cae5d824845"
	I1124 13:17:41.080763   12685 cri.go:89] found id: "e67c8d2fe588f3f54af6367c17ccdcd8e19cb3743d88b7ec8c67fe42ca1a460f"
	I1124 13:17:41.080771   12685 cri.go:89] found id: "eb8a3b03da33f63ba2961d22861575e453cf82588de943e9e2fb7ddc1c122f8e"
	I1124 13:17:41.080774   12685 cri.go:89] found id: "7ee4d3e3512b4459fe8cdc4d2749e1dd56d9f5a3d0dc86f8cdd7fbb19e41a97f"
	I1124 13:17:41.080778   12685 cri.go:89] found id: "06594980d0770dcb6f5dcacad55f96761b6d2067a3e3ce0eb929042eff4265f7"
	I1124 13:17:41.080781   12685 cri.go:89] found id: "586f58fd71be7749f41db1a712c05bfa756f658f9c19cfb9aa671c9a8b754c34"
	I1124 13:17:41.080786   12685 cri.go:89] found id: "8a76716af61b8877cf1b7e3b54f8de99e52ffe5a1a222437f661359793b8bc5a"
	I1124 13:17:41.080789   12685 cri.go:89] found id: "612efd74b90cefa74785056cca24a59600ffe50c4cd3d5dc64320505db3e6d46"
	I1124 13:17:41.080793   12685 cri.go:89] found id: "c4588bdbb8946f09fafad25ba2e09f39ff759f2b8cad2f6dadade51f4c71ae52"
	I1124 13:17:41.080796   12685 cri.go:89] found id: "646560c4253bbd4f4ca0e15d6bfd9bc9d55fa7f25775099b26b7343265d565aa"
	I1124 13:17:41.080799   12685 cri.go:89] found id: "47bfa25635dec103e10675178c02de7fcaf080dccd97feecc568c666df4bea66"
	I1124 13:17:41.080805   12685 cri.go:89] found id: "9bbf65dfab06c0f9b0a6968500261eebc250dd8b47864e172e662f184a66f382"
	I1124 13:17:41.080811   12685 cri.go:89] found id: "9eb49b73252f434f02f8d18b6013cd3bdd4b40c3b4046a35100d5350b9164167"
	I1124 13:17:41.080814   12685 cri.go:89] found id: "448d21b7cb2225b5c6cb8829a9d9cc5985f1bee205abf34cb9730bc9b186bc0d"
	I1124 13:17:41.080820   12685 cri.go:89] found id: ""
	I1124 13:17:41.080878   12685 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 13:17:41.095909   12685 out.go:203] 
	W1124 13:17:41.098787   12685 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T13:17:41Z" level=error msg="open /run/runc: no such file or directory"
	
	W1124 13:17:41.098814   12685 out.go:285] * 
	W1124 13:17:41.103143   12685 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1124 13:17:41.106181   12685 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable metrics-server addon: args "out/minikube-linux-arm64 -p addons-647907 addons disable metrics-server --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/MetricsServer (5.37s)

TestAddons/parallel/CSI (39.47s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1124 13:17:39.744546    4611 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1124 13:17:39.749357    4611 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1124 13:17:39.749378    4611 kapi.go:107] duration metric: took 4.838867ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 4.847876ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-647907 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-647907 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-647907 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-647907 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-647907 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-647907 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-647907 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-647907 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-647907 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-647907 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-647907 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-647907 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-647907 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-647907 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-647907 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-647907 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-647907 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-647907 get pvc hpvc -o jsonpath={.status.phase} -n default
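The seventeen identical invocations above are the test's polling loop: helpers_test.go re-reads the claim's `.status.phase` until it reports Bound, bounded by the 6m0s deadline logged at the start of the wait. A rough shell equivalent of that wait (the 2-second interval is an assumption; the test's own interval is not shown in this report):

	# poll the PVC phase until the claim binds (sketch)
	until [ "$(kubectl --context addons-647907 get pvc hpvc -n default -o jsonpath='{.status.phase}')" = "Bound" ]; do
		sleep 2
	done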
addons_test.go:562: (dbg) Run:  kubectl --context addons-647907 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [9df27cfa-608b-47c2-9b74-3c1771f7cee2] Pending
helpers_test.go:352: "task-pv-pod" [9df27cfa-608b-47c2-9b74-3c1771f7cee2] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [9df27cfa-608b-47c2-9b74-3c1771f7cee2] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 11.004696707s
addons_test.go:572: (dbg) Run:  kubectl --context addons-647907 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-647907 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-647907 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
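The snapshot step applies testdata/csi-hostpath-driver/snapshot.yaml and then polls `.status.readyToUse` on the resulting VolumeSnapshot until it is true. The manifest itself is not reproduced in this report; the following is a plausible reconstruction in which everything except the names new-snapshot-demo and hpvc (both logged above) is an assumption:

	kubectl --context addons-647907 apply -f - <<-'EOF'
	apiVersion: snapshot.storage.k8s.io/v1
	kind: VolumeSnapshot
	metadata:
	  name: new-snapshot-demo
	spec:
	  volumeSnapshotClassName: csi-hostpath-snapclass   # assumed class name
	  source:
	    persistentVolumeClaimName: hpvc
	EOF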
addons_test.go:582: (dbg) Run:  kubectl --context addons-647907 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-647907 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-647907 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-647907 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-647907 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-647907 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [565f970f-28e0-49be-b166-1412a10b4ccf] Pending
helpers_test.go:352: "task-pv-pod-restore" [565f970f-28e0-49be-b166-1412a10b4ccf] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [565f970f-28e0-49be-b166-1412a10b4ccf] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003223007s
addons_test.go:614: (dbg) Run:  kubectl --context addons-647907 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-647907 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-647907 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-647907 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-647907 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (296.59351ms)

-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1124 13:18:18.713447   13670 out.go:360] Setting OutFile to fd 1 ...
	I1124 13:18:18.713676   13670 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:18:18.713689   13670 out.go:374] Setting ErrFile to fd 2...
	I1124 13:18:18.713696   13670 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:18:18.714015   13670 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-2805/.minikube/bin
	I1124 13:18:18.714445   13670 mustload.go:66] Loading cluster: addons-647907
	I1124 13:18:18.714898   13670 config.go:182] Loaded profile config "addons-647907": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 13:18:18.714918   13670 addons.go:622] checking whether the cluster is paused
	I1124 13:18:18.715072   13670 config.go:182] Loaded profile config "addons-647907": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 13:18:18.715089   13670 host.go:66] Checking if "addons-647907" exists ...
	I1124 13:18:18.715799   13670 cli_runner.go:164] Run: docker container inspect addons-647907 --format={{.State.Status}}
	I1124 13:18:18.733522   13670 ssh_runner.go:195] Run: systemctl --version
	I1124 13:18:18.734085   13670 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-647907
	I1124 13:18:18.752773   13670 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/addons-647907/id_rsa Username:docker}
	I1124 13:18:18.862026   13670 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 13:18:18.862151   13670 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 13:18:18.893232   13670 cri.go:89] found id: "f5728fafdcfd6c412d1ef4060b50df18e35cf7f5dc2269c8aa70dd91319f0405"
	I1124 13:18:18.893259   13670 cri.go:89] found id: "8ae9f6e9c70db26bf1fb36c7b9a0254fe500c317f2a6945d2f07ea86680fb3c8"
	I1124 13:18:18.893264   13670 cri.go:89] found id: "5b3fa06bb0192f03af3aabc7e7c455544064d84030d3f3d8fa1eb708a7e3beb5"
	I1124 13:18:18.893268   13670 cri.go:89] found id: "973aff8a30a4f5e51842fa26a1fbb6214e33d96b452253dcff6908718c2afe7c"
	I1124 13:18:18.893272   13670 cri.go:89] found id: "6b55d5b71eba63c5fc2e54b61ab020786ab11eaa1c8a4cf025c69555ae454c9d"
	I1124 13:18:18.893275   13670 cri.go:89] found id: "7e0552d507b6a5d2225b7ea9bed55bfaec54aa5b07b536fb5623d66660e99d5c"
	I1124 13:18:18.893278   13670 cri.go:89] found id: "1868812a690656d2ef78f1d40726b50c55d3757008e08cd19a56abefc60b8f0b"
	I1124 13:18:18.893282   13670 cri.go:89] found id: "e66116781aa16aa7e3505b699baab2f91eb0fe84f2d48e96da8ffaf7c370d972"
	I1124 13:18:18.893285   13670 cri.go:89] found id: "2d0baf6e276932cf15682e3bdf601a7b4d5900d45814832a33695f1bc733e4f2"
	I1124 13:18:18.893292   13670 cri.go:89] found id: "0ba479f65d38f41fd3e6c32354ac0c909a63bf28c2527a3b30d15cae5d824845"
	I1124 13:18:18.893295   13670 cri.go:89] found id: "e67c8d2fe588f3f54af6367c17ccdcd8e19cb3743d88b7ec8c67fe42ca1a460f"
	I1124 13:18:18.893299   13670 cri.go:89] found id: "eb8a3b03da33f63ba2961d22861575e453cf82588de943e9e2fb7ddc1c122f8e"
	I1124 13:18:18.893303   13670 cri.go:89] found id: "7ee4d3e3512b4459fe8cdc4d2749e1dd56d9f5a3d0dc86f8cdd7fbb19e41a97f"
	I1124 13:18:18.893306   13670 cri.go:89] found id: "06594980d0770dcb6f5dcacad55f96761b6d2067a3e3ce0eb929042eff4265f7"
	I1124 13:18:18.893309   13670 cri.go:89] found id: "586f58fd71be7749f41db1a712c05bfa756f658f9c19cfb9aa671c9a8b754c34"
	I1124 13:18:18.893315   13670 cri.go:89] found id: "8a76716af61b8877cf1b7e3b54f8de99e52ffe5a1a222437f661359793b8bc5a"
	I1124 13:18:18.893321   13670 cri.go:89] found id: "612efd74b90cefa74785056cca24a59600ffe50c4cd3d5dc64320505db3e6d46"
	I1124 13:18:18.893324   13670 cri.go:89] found id: "c4588bdbb8946f09fafad25ba2e09f39ff759f2b8cad2f6dadade51f4c71ae52"
	I1124 13:18:18.893328   13670 cri.go:89] found id: "646560c4253bbd4f4ca0e15d6bfd9bc9d55fa7f25775099b26b7343265d565aa"
	I1124 13:18:18.893331   13670 cri.go:89] found id: "47bfa25635dec103e10675178c02de7fcaf080dccd97feecc568c666df4bea66"
	I1124 13:18:18.893341   13670 cri.go:89] found id: "9bbf65dfab06c0f9b0a6968500261eebc250dd8b47864e172e662f184a66f382"
	I1124 13:18:18.893350   13670 cri.go:89] found id: "9eb49b73252f434f02f8d18b6013cd3bdd4b40c3b4046a35100d5350b9164167"
	I1124 13:18:18.893354   13670 cri.go:89] found id: "448d21b7cb2225b5c6cb8829a9d9cc5985f1bee205abf34cb9730bc9b186bc0d"
	I1124 13:18:18.893369   13670 cri.go:89] found id: ""
	I1124 13:18:18.893424   13670 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 13:18:18.914207   13670 out.go:203] 
	W1124 13:18:18.917086   13670 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T13:18:18Z" level=error msg="open /run/runc: no such file or directory"
	
	W1124 13:18:18.917118   13670 out.go:285] * 
	W1124 13:18:18.921424   13670 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1124 13:18:18.924472   13670 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable volumesnapshots addon: args "out/minikube-linux-arm64 -p addons-647907 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-647907 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-647907 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (286.788671ms)

-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1124 13:18:18.993732   13714 out.go:360] Setting OutFile to fd 1 ...
	I1124 13:18:18.993951   13714 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:18:18.993964   13714 out.go:374] Setting ErrFile to fd 2...
	I1124 13:18:18.993970   13714 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:18:18.994257   13714 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-2805/.minikube/bin
	I1124 13:18:18.994629   13714 mustload.go:66] Loading cluster: addons-647907
	I1124 13:18:18.995440   13714 config.go:182] Loaded profile config "addons-647907": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 13:18:18.995464   13714 addons.go:622] checking whether the cluster is paused
	I1124 13:18:18.995599   13714 config.go:182] Loaded profile config "addons-647907": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 13:18:18.995671   13714 host.go:66] Checking if "addons-647907" exists ...
	I1124 13:18:18.996222   13714 cli_runner.go:164] Run: docker container inspect addons-647907 --format={{.State.Status}}
	I1124 13:18:19.022296   13714 ssh_runner.go:195] Run: systemctl --version
	I1124 13:18:19.022357   13714 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-647907
	I1124 13:18:19.040978   13714 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/addons-647907/id_rsa Username:docker}
	I1124 13:18:19.146026   13714 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 13:18:19.146108   13714 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 13:18:19.185914   13714 cri.go:89] found id: "f5728fafdcfd6c412d1ef4060b50df18e35cf7f5dc2269c8aa70dd91319f0405"
	I1124 13:18:19.185946   13714 cri.go:89] found id: "8ae9f6e9c70db26bf1fb36c7b9a0254fe500c317f2a6945d2f07ea86680fb3c8"
	I1124 13:18:19.185952   13714 cri.go:89] found id: "5b3fa06bb0192f03af3aabc7e7c455544064d84030d3f3d8fa1eb708a7e3beb5"
	I1124 13:18:19.185956   13714 cri.go:89] found id: "973aff8a30a4f5e51842fa26a1fbb6214e33d96b452253dcff6908718c2afe7c"
	I1124 13:18:19.185960   13714 cri.go:89] found id: "6b55d5b71eba63c5fc2e54b61ab020786ab11eaa1c8a4cf025c69555ae454c9d"
	I1124 13:18:19.185963   13714 cri.go:89] found id: "7e0552d507b6a5d2225b7ea9bed55bfaec54aa5b07b536fb5623d66660e99d5c"
	I1124 13:18:19.185966   13714 cri.go:89] found id: "1868812a690656d2ef78f1d40726b50c55d3757008e08cd19a56abefc60b8f0b"
	I1124 13:18:19.185969   13714 cri.go:89] found id: "e66116781aa16aa7e3505b699baab2f91eb0fe84f2d48e96da8ffaf7c370d972"
	I1124 13:18:19.185973   13714 cri.go:89] found id: "2d0baf6e276932cf15682e3bdf601a7b4d5900d45814832a33695f1bc733e4f2"
	I1124 13:18:19.185984   13714 cri.go:89] found id: "0ba479f65d38f41fd3e6c32354ac0c909a63bf28c2527a3b30d15cae5d824845"
	I1124 13:18:19.185990   13714 cri.go:89] found id: "e67c8d2fe588f3f54af6367c17ccdcd8e19cb3743d88b7ec8c67fe42ca1a460f"
	I1124 13:18:19.185993   13714 cri.go:89] found id: "eb8a3b03da33f63ba2961d22861575e453cf82588de943e9e2fb7ddc1c122f8e"
	I1124 13:18:19.185996   13714 cri.go:89] found id: "7ee4d3e3512b4459fe8cdc4d2749e1dd56d9f5a3d0dc86f8cdd7fbb19e41a97f"
	I1124 13:18:19.186001   13714 cri.go:89] found id: "06594980d0770dcb6f5dcacad55f96761b6d2067a3e3ce0eb929042eff4265f7"
	I1124 13:18:19.186006   13714 cri.go:89] found id: "586f58fd71be7749f41db1a712c05bfa756f658f9c19cfb9aa671c9a8b754c34"
	I1124 13:18:19.186014   13714 cri.go:89] found id: "8a76716af61b8877cf1b7e3b54f8de99e52ffe5a1a222437f661359793b8bc5a"
	I1124 13:18:19.186018   13714 cri.go:89] found id: "612efd74b90cefa74785056cca24a59600ffe50c4cd3d5dc64320505db3e6d46"
	I1124 13:18:19.186023   13714 cri.go:89] found id: "c4588bdbb8946f09fafad25ba2e09f39ff759f2b8cad2f6dadade51f4c71ae52"
	I1124 13:18:19.186026   13714 cri.go:89] found id: "646560c4253bbd4f4ca0e15d6bfd9bc9d55fa7f25775099b26b7343265d565aa"
	I1124 13:18:19.186028   13714 cri.go:89] found id: "47bfa25635dec103e10675178c02de7fcaf080dccd97feecc568c666df4bea66"
	I1124 13:18:19.186040   13714 cri.go:89] found id: "9bbf65dfab06c0f9b0a6968500261eebc250dd8b47864e172e662f184a66f382"
	I1124 13:18:19.186043   13714 cri.go:89] found id: "9eb49b73252f434f02f8d18b6013cd3bdd4b40c3b4046a35100d5350b9164167"
	I1124 13:18:19.186046   13714 cri.go:89] found id: "448d21b7cb2225b5c6cb8829a9d9cc5985f1bee205abf34cb9730bc9b186bc0d"
	I1124 13:18:19.186049   13714 cri.go:89] found id: ""
	I1124 13:18:19.186105   13714 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 13:18:19.201416   13714 out.go:203] 
	W1124 13:18:19.204279   13714 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T13:18:19Z" level=error msg="open /run/runc: no such file or directory"
	
	W1124 13:18:19.204305   13714 out.go:285] * 
	W1124 13:18:19.208664   13714 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1124 13:18:19.211623   13714 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-arm64 -p addons-647907 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CSI (39.47s)

TestAddons/parallel/Headlamp (3.18s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-647907 --alsologtostderr -v=1
addons_test.go:808: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable headlamp -p addons-647907 --alsologtostderr -v=1: exit status 11 (266.520715ms)

-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1124 13:17:14.732716   11490 out.go:360] Setting OutFile to fd 1 ...
	I1124 13:17:14.734134   11490 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:17:14.734143   11490 out.go:374] Setting ErrFile to fd 2...
	I1124 13:17:14.734149   11490 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:17:14.735234   11490 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-2805/.minikube/bin
	I1124 13:17:14.735792   11490 mustload.go:66] Loading cluster: addons-647907
	I1124 13:17:14.736790   11490 config.go:182] Loaded profile config "addons-647907": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 13:17:14.736811   11490 addons.go:622] checking whether the cluster is paused
	I1124 13:17:14.736954   11490 config.go:182] Loaded profile config "addons-647907": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 13:17:14.736965   11490 host.go:66] Checking if "addons-647907" exists ...
	I1124 13:17:14.737665   11490 cli_runner.go:164] Run: docker container inspect addons-647907 --format={{.State.Status}}
	I1124 13:17:14.757965   11490 ssh_runner.go:195] Run: systemctl --version
	I1124 13:17:14.758030   11490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-647907
	I1124 13:17:14.777288   11490 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/addons-647907/id_rsa Username:docker}
	I1124 13:17:14.885802   11490 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 13:17:14.885934   11490 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 13:17:14.924329   11490 cri.go:89] found id: "f5728fafdcfd6c412d1ef4060b50df18e35cf7f5dc2269c8aa70dd91319f0405"
	I1124 13:17:14.924351   11490 cri.go:89] found id: "8ae9f6e9c70db26bf1fb36c7b9a0254fe500c317f2a6945d2f07ea86680fb3c8"
	I1124 13:17:14.924357   11490 cri.go:89] found id: "5b3fa06bb0192f03af3aabc7e7c455544064d84030d3f3d8fa1eb708a7e3beb5"
	I1124 13:17:14.924361   11490 cri.go:89] found id: "973aff8a30a4f5e51842fa26a1fbb6214e33d96b452253dcff6908718c2afe7c"
	I1124 13:17:14.924364   11490 cri.go:89] found id: "6b55d5b71eba63c5fc2e54b61ab020786ab11eaa1c8a4cf025c69555ae454c9d"
	I1124 13:17:14.924368   11490 cri.go:89] found id: "7e0552d507b6a5d2225b7ea9bed55bfaec54aa5b07b536fb5623d66660e99d5c"
	I1124 13:17:14.924371   11490 cri.go:89] found id: "1868812a690656d2ef78f1d40726b50c55d3757008e08cd19a56abefc60b8f0b"
	I1124 13:17:14.924374   11490 cri.go:89] found id: "e66116781aa16aa7e3505b699baab2f91eb0fe84f2d48e96da8ffaf7c370d972"
	I1124 13:17:14.924377   11490 cri.go:89] found id: "2d0baf6e276932cf15682e3bdf601a7b4d5900d45814832a33695f1bc733e4f2"
	I1124 13:17:14.924383   11490 cri.go:89] found id: "0ba479f65d38f41fd3e6c32354ac0c909a63bf28c2527a3b30d15cae5d824845"
	I1124 13:17:14.924387   11490 cri.go:89] found id: "e67c8d2fe588f3f54af6367c17ccdcd8e19cb3743d88b7ec8c67fe42ca1a460f"
	I1124 13:17:14.924390   11490 cri.go:89] found id: "eb8a3b03da33f63ba2961d22861575e453cf82588de943e9e2fb7ddc1c122f8e"
	I1124 13:17:14.924393   11490 cri.go:89] found id: "7ee4d3e3512b4459fe8cdc4d2749e1dd56d9f5a3d0dc86f8cdd7fbb19e41a97f"
	I1124 13:17:14.924396   11490 cri.go:89] found id: "06594980d0770dcb6f5dcacad55f96761b6d2067a3e3ce0eb929042eff4265f7"
	I1124 13:17:14.924400   11490 cri.go:89] found id: "586f58fd71be7749f41db1a712c05bfa756f658f9c19cfb9aa671c9a8b754c34"
	I1124 13:17:14.924405   11490 cri.go:89] found id: "8a76716af61b8877cf1b7e3b54f8de99e52ffe5a1a222437f661359793b8bc5a"
	I1124 13:17:14.924408   11490 cri.go:89] found id: "612efd74b90cefa74785056cca24a59600ffe50c4cd3d5dc64320505db3e6d46"
	I1124 13:17:14.924413   11490 cri.go:89] found id: "c4588bdbb8946f09fafad25ba2e09f39ff759f2b8cad2f6dadade51f4c71ae52"
	I1124 13:17:14.924416   11490 cri.go:89] found id: "646560c4253bbd4f4ca0e15d6bfd9bc9d55fa7f25775099b26b7343265d565aa"
	I1124 13:17:14.924419   11490 cri.go:89] found id: "47bfa25635dec103e10675178c02de7fcaf080dccd97feecc568c666df4bea66"
	I1124 13:17:14.924425   11490 cri.go:89] found id: "9bbf65dfab06c0f9b0a6968500261eebc250dd8b47864e172e662f184a66f382"
	I1124 13:17:14.924428   11490 cri.go:89] found id: "9eb49b73252f434f02f8d18b6013cd3bdd4b40c3b4046a35100d5350b9164167"
	I1124 13:17:14.924431   11490 cri.go:89] found id: "448d21b7cb2225b5c6cb8829a9d9cc5985f1bee205abf34cb9730bc9b186bc0d"
	I1124 13:17:14.924434   11490 cri.go:89] found id: ""
	I1124 13:17:14.924490   11490 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 13:17:14.937490   11490 out.go:203] 
	W1124 13:17:14.938672   11490 out.go:285] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T13:17:14Z" level=error msg="open /run/runc: no such file or directory"
	
	W1124 13:17:14.938692   11490 out.go:285] * 
	W1124 13:17:14.942946   11490 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1124 13:17:14.944382   11490 out.go:203] 

** /stderr **
addons_test.go:810: failed to enable headlamp addon: args: "out/minikube-linux-arm64 addons enable headlamp -p addons-647907 --alsologtostderr -v=1": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Headlamp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-647907
helpers_test.go:243: (dbg) docker inspect addons-647907:

-- stdout --
	[
	    {
	        "Id": "72292e2fa4c837496fd187b0d5c5858af15fb8e4b6965f101b72fc196db21cc3",
	        "Created": "2025-11-24T13:14:44.795405416Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 5764,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T13:14:44.8616038Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/72292e2fa4c837496fd187b0d5c5858af15fb8e4b6965f101b72fc196db21cc3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/72292e2fa4c837496fd187b0d5c5858af15fb8e4b6965f101b72fc196db21cc3/hostname",
	        "HostsPath": "/var/lib/docker/containers/72292e2fa4c837496fd187b0d5c5858af15fb8e4b6965f101b72fc196db21cc3/hosts",
	        "LogPath": "/var/lib/docker/containers/72292e2fa4c837496fd187b0d5c5858af15fb8e4b6965f101b72fc196db21cc3/72292e2fa4c837496fd187b0d5c5858af15fb8e4b6965f101b72fc196db21cc3-json.log",
	        "Name": "/addons-647907",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-647907:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-647907",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "72292e2fa4c837496fd187b0d5c5858af15fb8e4b6965f101b72fc196db21cc3",
	                "LowerDir": "/var/lib/docker/overlay2/83dc36c1c0d9c3009a399933d1eff6bd8f53c389406be1e2b1643f996c3d4cf7-init/diff:/var/lib/docker/overlay2/13a44a1c9c7389f495d930a01834ff28273a0e5eb2fe3411fc4db3ff0709690d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/83dc36c1c0d9c3009a399933d1eff6bd8f53c389406be1e2b1643f996c3d4cf7/merged",
	                "UpperDir": "/var/lib/docker/overlay2/83dc36c1c0d9c3009a399933d1eff6bd8f53c389406be1e2b1643f996c3d4cf7/diff",
	                "WorkDir": "/var/lib/docker/overlay2/83dc36c1c0d9c3009a399933d1eff6bd8f53c389406be1e2b1643f996c3d4cf7/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-647907",
	                "Source": "/var/lib/docker/volumes/addons-647907/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-647907",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-647907",
	                "name.minikube.sigs.k8s.io": "addons-647907",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9c97e8ca3578b37bdb189bf9fb2db7d72212c54939b4e96b709ed6d6d896a380",
	            "SandboxKey": "/var/run/docker/netns/9c97e8ca3578",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-647907": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "96:03:79:93:d7:24",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "bdfb6399110fcb1822ff82a0a9ddcac2babaa574ec38cda228f6cd8fcca07e1e",
	                    "EndpointID": "b14643e12552a796ea7b17975b1a69233b18762ba658c083cf2b24865e9b3bb9",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-647907",
	                        "72292e2fa4c8"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
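The `Ports` map in the inspect output above is what the later `cli_runner` calls in this log consume: each guest port is published on 127.0.0.1 with an ephemeral host port. A minimal sketch of that lookup, using the same Go template the log shows minikube passing to `docker container inspect -f` (the function name is illustrative, and the container is assumed to still be running):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostPort resolves the ephemeral host port Docker assigned to a published
// guest port, e.g. "22/tcp" -> "32768" for the NetworkSettings block above.
func hostPort(container, guestPort string) (string, error) {
	tmpl := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports %q) 0).HostPort}}`, guestPort)
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := hostPort("addons-647907", "22/tcp")
	fmt.Println(port, err) // "32768" per the inspect output above
}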
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-647907 -n addons-647907
helpers_test.go:252: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p addons-647907 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p addons-647907 logs -n 25: (1.404077709s)
helpers_test.go:260: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	│ COMMAND │ ARGS │ PROFILE │ USER │ VERSION │ START TIME │ END TIME │
	│ start │ -o=json --download-only -p download-only-756094 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-756094 │ jenkins │ v1.37.0 │ 24 Nov 25 13:13 UTC │ │
	│ delete │ --all │ minikube │ jenkins │ v1.37.0 │ 24 Nov 25 13:14 UTC │ 24 Nov 25 13:14 UTC │
	│ delete │ -p download-only-756094 │ download-only-756094 │ jenkins │ v1.37.0 │ 24 Nov 25 13:14 UTC │ 24 Nov 25 13:14 UTC │
	│ start │ -o=json --download-only -p download-only-949930 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-949930 │ jenkins │ v1.37.0 │ 24 Nov 25 13:14 UTC │ │
	│ delete │ --all │ minikube │ jenkins │ v1.37.0 │ 24 Nov 25 13:14 UTC │ 24 Nov 25 13:14 UTC │
	│ delete │ -p download-only-949930 │ download-only-949930 │ jenkins │ v1.37.0 │ 24 Nov 25 13:14 UTC │ 24 Nov 25 13:14 UTC │
	│ delete │ -p download-only-756094 │ download-only-756094 │ jenkins │ v1.37.0 │ 24 Nov 25 13:14 UTC │ 24 Nov 25 13:14 UTC │
	│ delete │ -p download-only-949930 │ download-only-949930 │ jenkins │ v1.37.0 │ 24 Nov 25 13:14 UTC │ 24 Nov 25 13:14 UTC │
	│ start │ --download-only -p download-docker-367583 --alsologtostderr --driver=docker  --container-runtime=crio │ download-docker-367583 │ jenkins │ v1.37.0 │ 24 Nov 25 13:14 UTC │ │
	│ delete │ -p download-docker-367583 │ download-docker-367583 │ jenkins │ v1.37.0 │ 24 Nov 25 13:14 UTC │ 24 Nov 25 13:14 UTC │
	│ start │ --download-only -p binary-mirror-147804 --alsologtostderr --binary-mirror http://127.0.0.1:44105 --driver=docker  --container-runtime=crio │ binary-mirror-147804 │ jenkins │ v1.37.0 │ 24 Nov 25 13:14 UTC │ │
	│ delete │ -p binary-mirror-147804 │ binary-mirror-147804 │ jenkins │ v1.37.0 │ 24 Nov 25 13:14 UTC │ 24 Nov 25 13:14 UTC │
	│ addons │ enable dashboard -p addons-647907 │ addons-647907 │ jenkins │ v1.37.0 │ 24 Nov 25 13:14 UTC │ │
	│ addons │ disable dashboard -p addons-647907 │ addons-647907 │ jenkins │ v1.37.0 │ 24 Nov 25 13:14 UTC │ │
	│ start │ -p addons-647907 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-647907 │ jenkins │ v1.37.0 │ 24 Nov 25 13:14 UTC │ 24 Nov 25 13:17 UTC │
	│ addons │ addons-647907 addons disable volcano --alsologtostderr -v=1 │ addons-647907 │ jenkins │ v1.37.0 │ 24 Nov 25 13:17 UTC │ │
	│ addons │ addons-647907 addons disable gcp-auth --alsologtostderr -v=1 │ addons-647907 │ jenkins │ v1.37.0 │ 24 Nov 25 13:17 UTC │ │
	│ addons │ enable headlamp -p addons-647907 --alsologtostderr -v=1 │ addons-647907 │ jenkins │ v1.37.0 │ 24 Nov 25 13:17 UTC │ │
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 13:14:18
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
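Every entry that follows obeys the klog-style format documented on the line above. For readers post-processing these reports, here is a small sketch of how one such line can be split into its fields; the regular expression is an assumption derived from that format string, not a minikube utility.

package main

import (
	"fmt"
	"regexp"
)

// Matches "[IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg" as documented
// in the log header above.
var klogLine = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^:]+:\d+)\] (.*)$`)

func main() {
	line := "I1124 13:14:18.959254    5361 out.go:360] Setting OutFile to fd 1 ..."
	m := klogLine.FindStringSubmatch(line)
	if m == nil {
		fmt.Println("not a klog line")
		return
	}
	fmt.Printf("level=%s date=%s time=%s tid=%s loc=%s msg=%q\n",
		m[1], m[2], m[3], m[4], m[5], m[6])
}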
	I1124 13:14:18.959254    5361 out.go:360] Setting OutFile to fd 1 ...
	I1124 13:14:18.959722    5361 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:14:18.959778    5361 out.go:374] Setting ErrFile to fd 2...
	I1124 13:14:18.959798    5361 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:14:18.960070    5361 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-2805/.minikube/bin
	I1124 13:14:18.960550    5361 out.go:368] Setting JSON to false
	I1124 13:14:18.961281    5361 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":3410,"bootTime":1763986649,"procs":143,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1124 13:14:18.961368    5361 start.go:143] virtualization:  
	I1124 13:14:18.964751    5361 out.go:179] * [addons-647907] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1124 13:14:18.968490    5361 out.go:179]   - MINIKUBE_LOCATION=21932
	I1124 13:14:18.968559    5361 notify.go:221] Checking for updates...
	I1124 13:14:18.974110    5361 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 13:14:18.976954    5361 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21932-2805/kubeconfig
	I1124 13:14:18.979743    5361 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21932-2805/.minikube
	I1124 13:14:18.982412    5361 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1124 13:14:18.985229    5361 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 13:14:18.988213    5361 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 13:14:19.017235    5361 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1124 13:14:19.017350    5361 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 13:14:19.076565    5361 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:47 SystemTime:2025-11-24 13:14:19.067653266 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 13:14:19.076669    5361 docker.go:319] overlay module found
	I1124 13:14:19.079841    5361 out.go:179] * Using the docker driver based on user configuration
	I1124 13:14:19.082761    5361 start.go:309] selected driver: docker
	I1124 13:14:19.082786    5361 start.go:927] validating driver "docker" against <nil>
	I1124 13:14:19.082813    5361 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 13:14:19.083663    5361 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 13:14:19.135898    5361 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:47 SystemTime:2025-11-24 13:14:19.126582675 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 13:14:19.136061    5361 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1124 13:14:19.136280    5361 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 13:14:19.139372    5361 out.go:179] * Using Docker driver with root privileges
	I1124 13:14:19.142408    5361 cni.go:84] Creating CNI manager for ""
	I1124 13:14:19.142478    5361 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 13:14:19.142490    5361 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1124 13:14:19.142579    5361 start.go:353] cluster config:
	{Name:addons-647907 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-647907 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 13:14:19.145762    5361 out.go:179] * Starting "addons-647907" primary control-plane node in "addons-647907" cluster
	I1124 13:14:19.148630    5361 cache.go:134] Beginning downloading kic base image for docker with crio
	I1124 13:14:19.151553    5361 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1124 13:14:19.154399    5361 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 13:14:19.154447    5361 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21932-2805/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1124 13:14:19.154461    5361 cache.go:65] Caching tarball of preloaded images
	I1124 13:14:19.154468    5361 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1124 13:14:19.154540    5361 preload.go:238] Found /home/jenkins/minikube-integration/21932-2805/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1124 13:14:19.154550    5361 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1124 13:14:19.154890    5361 profile.go:143] Saving config to /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/addons-647907/config.json ...
	I1124 13:14:19.154920    5361 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/addons-647907/config.json: {Name:mkcdf77b8bac65501405fa44a8ac6bcb96bb5594 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:14:19.170121    5361 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f to local cache
	I1124 13:14:19.170252    5361 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local cache directory
	I1124 13:14:19.170270    5361 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local cache directory, skipping pull
	I1124 13:14:19.170275    5361 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in cache, skipping pull
	I1124 13:14:19.170281    5361 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f as a tarball
	I1124 13:14:19.170286    5361 cache.go:172] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f from local cache
	I1124 13:14:37.102225    5361 cache.go:174] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f from cached tarball
	I1124 13:14:37.102278    5361 cache.go:240] Successfully downloaded all kic artifacts
	I1124 13:14:37.102313    5361 start.go:360] acquireMachinesLock for addons-647907: {Name:mk166fce5dc7857652385b2817a2702b00f03887 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 13:14:37.102437    5361 start.go:364] duration metric: took 99.955µs to acquireMachinesLock for "addons-647907"
	I1124 13:14:37.102471    5361 start.go:93] Provisioning new machine with config: &{Name:addons-647907 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-647907 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 13:14:37.102541    5361 start.go:125] createHost starting for "" (driver="docker")
	I1124 13:14:37.104284    5361 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1124 13:14:37.104518    5361 start.go:159] libmachine.API.Create for "addons-647907" (driver="docker")
	I1124 13:14:37.104553    5361 client.go:173] LocalClient.Create starting
	I1124 13:14:37.104663    5361 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca.pem
	I1124 13:14:37.483211    5361 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21932-2805/.minikube/certs/cert.pem
	I1124 13:14:37.693071    5361 cli_runner.go:164] Run: docker network inspect addons-647907 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1124 13:14:37.709134    5361 cli_runner.go:211] docker network inspect addons-647907 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1124 13:14:37.709223    5361 network_create.go:284] running [docker network inspect addons-647907] to gather additional debugging logs...
	I1124 13:14:37.709245    5361 cli_runner.go:164] Run: docker network inspect addons-647907
	W1124 13:14:37.725409    5361 cli_runner.go:211] docker network inspect addons-647907 returned with exit code 1
	I1124 13:14:37.725439    5361 network_create.go:287] error running [docker network inspect addons-647907]: docker network inspect addons-647907: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-647907 not found
	I1124 13:14:37.725457    5361 network_create.go:289] output of [docker network inspect addons-647907]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-647907 not found
	
	** /stderr **
	I1124 13:14:37.725559    5361 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 13:14:37.741545    5361 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400191b650}
	I1124 13:14:37.741587    5361 network_create.go:124] attempt to create docker network addons-647907 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1124 13:14:37.741646    5361 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-647907 addons-647907
	I1124 13:14:37.793309    5361 network_create.go:108] docker network addons-647907 192.168.49.0/24 created
	I1124 13:14:37.793343    5361 kic.go:121] calculated static IP "192.168.49.2" for the "addons-647907" container
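The gateway (.1) used when creating the network and the node's static IP (.2) calculated here are simply the first two usable addresses of the free subnet selected at 13:14:37.741545. A minimal sketch of that derivation, assuming only the standard library (not minikube's actual helper):

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// The free private subnet selected in the log above.
	prefix := netip.MustParsePrefix("192.168.49.0/24")
	gateway := prefix.Addr().Next() // 192.168.49.1, assigned to the docker network
	staticIP := gateway.Next()      // 192.168.49.2, the kic node's address
	fmt.Println(gateway, staticIP)
}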
	I1124 13:14:37.793411    5361 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1124 13:14:37.809413    5361 cli_runner.go:164] Run: docker volume create addons-647907 --label name.minikube.sigs.k8s.io=addons-647907 --label created_by.minikube.sigs.k8s.io=true
	I1124 13:14:37.826744    5361 oci.go:103] Successfully created a docker volume addons-647907
	I1124 13:14:37.826828    5361 cli_runner.go:164] Run: docker run --rm --name addons-647907-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-647907 --entrypoint /usr/bin/test -v addons-647907:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1124 13:14:40.269681    5361 cli_runner.go:217] Completed: docker run --rm --name addons-647907-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-647907 --entrypoint /usr/bin/test -v addons-647907:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib: (2.442793106s)
	I1124 13:14:40.269707    5361 oci.go:107] Successfully prepared a docker volume addons-647907
	I1124 13:14:40.269752    5361 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 13:14:40.269763    5361 kic.go:194] Starting extracting preloaded images to volume ...
	I1124 13:14:40.269880    5361 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21932-2805/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-647907:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir
	I1124 13:14:44.727168    5361 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21932-2805/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-647907:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir: (4.457246814s)
	I1124 13:14:44.727217    5361 kic.go:203] duration metric: took 4.457434639s to extract preloaded images to volume ...
	W1124 13:14:44.727348    5361 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1124 13:14:44.727486    5361 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1124 13:14:44.780894    5361 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-647907 --name addons-647907 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-647907 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-647907 --network addons-647907 --ip 192.168.49.2 --volume addons-647907:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1124 13:14:45.269267    5361 cli_runner.go:164] Run: docker container inspect addons-647907 --format={{.State.Running}}
	I1124 13:14:45.297647    5361 cli_runner.go:164] Run: docker container inspect addons-647907 --format={{.State.Status}}
	I1124 13:14:45.329992    5361 cli_runner.go:164] Run: docker exec addons-647907 stat /var/lib/dpkg/alternatives/iptables
	I1124 13:14:45.387840    5361 oci.go:144] the created container "addons-647907" has a running status.
	I1124 13:14:45.387865    5361 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21932-2805/.minikube/machines/addons-647907/id_rsa...
	I1124 13:14:45.475406    5361 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21932-2805/.minikube/machines/addons-647907/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1124 13:14:45.495076    5361 cli_runner.go:164] Run: docker container inspect addons-647907 --format={{.State.Status}}
	I1124 13:14:45.514600    5361 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1124 13:14:45.514618    5361 kic_runner.go:114] Args: [docker exec --privileged addons-647907 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1124 13:14:45.559844    5361 cli_runner.go:164] Run: docker container inspect addons-647907 --format={{.State.Status}}
	I1124 13:14:45.580212    5361 machine.go:94] provisionDockerMachine start ...
	I1124 13:14:45.580293    5361 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-647907
	I1124 13:14:45.601268    5361 main.go:143] libmachine: Using SSH client type: native
	I1124 13:14:45.601612    5361 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1124 13:14:45.601621    5361 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 13:14:45.602293    5361 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1124 13:14:48.754864    5361 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-647907
	
	I1124 13:14:48.754888    5361 ubuntu.go:182] provisioning hostname "addons-647907"
	I1124 13:14:48.754951    5361 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-647907
	I1124 13:14:48.773898    5361 main.go:143] libmachine: Using SSH client type: native
	I1124 13:14:48.774218    5361 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1124 13:14:48.774236    5361 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-647907 && echo "addons-647907" | sudo tee /etc/hostname
	I1124 13:14:48.936633    5361 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-647907
	
	I1124 13:14:48.936711    5361 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-647907
	I1124 13:14:48.956691    5361 main.go:143] libmachine: Using SSH client type: native
	I1124 13:14:48.957005    5361 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1124 13:14:48.957027    5361 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-647907' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-647907/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-647907' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 13:14:49.107613    5361 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 13:14:49.107636    5361 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21932-2805/.minikube CaCertPath:/home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21932-2805/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21932-2805/.minikube}
	I1124 13:14:49.107665    5361 ubuntu.go:190] setting up certificates
	I1124 13:14:49.107674    5361 provision.go:84] configureAuth start
	I1124 13:14:49.107734    5361 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-647907
	I1124 13:14:49.130873    5361 provision.go:143] copyHostCerts
	I1124 13:14:49.130956    5361 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21932-2805/.minikube/ca.pem (1078 bytes)
	I1124 13:14:49.131086    5361 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-2805/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21932-2805/.minikube/cert.pem (1123 bytes)
	I1124 13:14:49.131159    5361 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-2805/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21932-2805/.minikube/key.pem (1675 bytes)
	I1124 13:14:49.131220    5361 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21932-2805/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca-key.pem org=jenkins.addons-647907 san=[127.0.0.1 192.168.49.2 addons-647907 localhost minikube]
	I1124 13:14:49.224885    5361 provision.go:177] copyRemoteCerts
	I1124 13:14:49.224948    5361 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 13:14:49.224989    5361 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-647907
	I1124 13:14:49.241343    5361 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/addons-647907/id_rsa Username:docker}
	I1124 13:14:49.346887    5361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1124 13:14:49.363979    5361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1124 13:14:49.381466    5361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1124 13:14:49.398926    5361 provision.go:87] duration metric: took 291.226079ms to configureAuth
	I1124 13:14:49.398957    5361 ubuntu.go:206] setting minikube options for container-runtime
	I1124 13:14:49.399177    5361 config.go:182] Loaded profile config "addons-647907": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 13:14:49.399289    5361 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-647907
	I1124 13:14:49.416778    5361 main.go:143] libmachine: Using SSH client type: native
	I1124 13:14:49.417089    5361 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1124 13:14:49.417109    5361 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1124 13:14:49.723453    5361 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1124 13:14:49.723488    5361 machine.go:97] duration metric: took 4.143255809s to provisionDockerMachine
	I1124 13:14:49.723498    5361 client.go:176] duration metric: took 12.618936375s to LocalClient.Create
	I1124 13:14:49.723514    5361 start.go:167] duration metric: took 12.618997667s to libmachine.API.Create "addons-647907"
	I1124 13:14:49.723522    5361 start.go:293] postStartSetup for "addons-647907" (driver="docker")
	I1124 13:14:49.723532    5361 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 13:14:49.723655    5361 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 13:14:49.723733    5361 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-647907
	I1124 13:14:49.742263    5361 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/addons-647907/id_rsa Username:docker}
	I1124 13:14:49.847478    5361 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 13:14:49.850844    5361 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 13:14:49.850872    5361 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 13:14:49.850884    5361 filesync.go:126] Scanning /home/jenkins/minikube-integration/21932-2805/.minikube/addons for local assets ...
	I1124 13:14:49.850950    5361 filesync.go:126] Scanning /home/jenkins/minikube-integration/21932-2805/.minikube/files for local assets ...
	I1124 13:14:49.850984    5361 start.go:296] duration metric: took 127.455859ms for postStartSetup
	I1124 13:14:49.851302    5361 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-647907
	I1124 13:14:49.868940    5361 profile.go:143] Saving config to /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/addons-647907/config.json ...
	I1124 13:14:49.869230    5361 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 13:14:49.869279    5361 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-647907
	I1124 13:14:49.886217    5361 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/addons-647907/id_rsa Username:docker}
	I1124 13:14:49.988329    5361 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 13:14:49.992862    5361 start.go:128] duration metric: took 12.890306522s to createHost
	I1124 13:14:49.992890    5361 start.go:83] releasing machines lock for "addons-647907", held for 12.890440423s
	I1124 13:14:49.993236    5361 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-647907
	I1124 13:14:50.016970    5361 ssh_runner.go:195] Run: cat /version.json
	I1124 13:14:50.017028    5361 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 13:14:50.017030    5361 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-647907
	I1124 13:14:50.017090    5361 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-647907
	I1124 13:14:50.048603    5361 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/addons-647907/id_rsa Username:docker}
	I1124 13:14:50.049035    5361 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/addons-647907/id_rsa Username:docker}
	I1124 13:14:50.151107    5361 ssh_runner.go:195] Run: systemctl --version
	I1124 13:14:50.241651    5361 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1124 13:14:50.275847    5361 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 13:14:50.280180    5361 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 13:14:50.280279    5361 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 13:14:50.309661    5361 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1124 13:14:50.309685    5361 start.go:496] detecting cgroup driver to use...
	I1124 13:14:50.309718    5361 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1124 13:14:50.309767    5361 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1124 13:14:50.327183    5361 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1124 13:14:50.339968    5361 docker.go:218] disabling cri-docker service (if available) ...
	I1124 13:14:50.340032    5361 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 13:14:50.357604    5361 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 13:14:50.376662    5361 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 13:14:50.505551    5361 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 13:14:50.637075    5361 docker.go:234] disabling docker service ...
	I1124 13:14:50.637163    5361 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 13:14:50.659436    5361 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 13:14:50.672412    5361 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 13:14:50.786818    5361 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 13:14:50.901557    5361 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 13:14:50.915929    5361 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 13:14:50.930164    5361 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1124 13:14:50.930277    5361 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 13:14:50.939036    5361 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1124 13:14:50.939156    5361 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 13:14:50.947861    5361 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 13:14:50.956829    5361 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 13:14:50.965530    5361 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 13:14:50.973365    5361 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 13:14:50.981798    5361 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 13:14:50.995161    5361 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
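
The sed edits above aim at an end state in /etc/crio/crio.conf.d/02-crio.conf roughly like the following sketch; the exact section placement is an assumption based on CRI-O's stock TOML layout, not something the log shows:

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
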
	I1124 13:14:51.006604    5361 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 13:14:51.015179    5361 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1124 13:14:51.015250    5361 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1124 13:14:51.029473    5361 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
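
The failed sysctl above is expected before br_netfilter is loaded: /proc/sys/net/bridge/ only appears once the module is in. The recovery sequence the log performs, spelled out as a standalone sketch:

	sudo modprobe br_netfilter                       # creates /proc/sys/net/bridge/*
	sudo sysctl net.bridge.bridge-nf-call-iptables   # now resolvable
	echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward  # forwarding required for pod networking
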
	I1124 13:14:51.037855    5361 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 13:14:51.161247    5361 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1124 13:14:51.350601    5361 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1124 13:14:51.350742    5361 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1124 13:14:51.354396    5361 start.go:564] Will wait 60s for crictl version
	I1124 13:14:51.354476    5361 ssh_runner.go:195] Run: which crictl
	I1124 13:14:51.357837    5361 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 13:14:51.384515    5361 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1124 13:14:51.384655    5361 ssh_runner.go:195] Run: crio --version
	I1124 13:14:51.413563    5361 ssh_runner.go:195] Run: crio --version
	I1124 13:14:51.446075    5361 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1124 13:14:51.449013    5361 cli_runner.go:164] Run: docker network inspect addons-647907 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 13:14:51.465725    5361 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1124 13:14:51.469639    5361 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
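
The one-liner above filters out any stale host.minikube.internal entry, appends the fresh mapping, and installs the result with `sudo cp` rather than a redirect: the grep runs unprivileged, and cp overwrites /etc/hosts in place, which matters because Docker bind-mounts that file into the container so it cannot simply be replaced by rename. A quick check of the result (sketch):

	grep host.minikube.internal /etc/hosts   # expect: 192.168.49.1	host.minikube.internal
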
	I1124 13:14:51.479611    5361 kubeadm.go:884] updating cluster {Name:addons-647907 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-647907 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 13:14:51.479744    5361 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 13:14:51.479808    5361 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 13:14:51.516483    5361 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 13:14:51.516508    5361 crio.go:433] Images already preloaded, skipping extraction
	I1124 13:14:51.516568    5361 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 13:14:51.541994    5361 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 13:14:51.542018    5361 cache_images.go:86] Images are preloaded, skipping loading
	I1124 13:14:51.542026    5361 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1124 13:14:51.542114    5361 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-647907 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-647907 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
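
The kubelet unit text above is rendered in memory and installed a few lines below as /lib/systemd/system/kubelet.service plus the /etc/systemd/system/kubelet.service.d/10-kubeadm.conf drop-in (see the scp lines that follow). Once both are in place, the merged view can be inspected with (sketch):

	systemctl cat kubelet                          # unit file followed by the 10-kubeadm.conf drop-in
	sudo systemctl daemon-reload && sudo systemctl restart kubelet
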
	I1124 13:14:51.542197    5361 ssh_runner.go:195] Run: crio config
	I1124 13:14:51.597123    5361 cni.go:84] Creating CNI manager for ""
	I1124 13:14:51.597146    5361 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 13:14:51.597160    5361 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 13:14:51.597211    5361 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-647907 NodeName:addons-647907 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 13:14:51.597375    5361 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-647907"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1124 13:14:51.597452    5361 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1124 13:14:51.605049    5361 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 13:14:51.605118    5361 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 13:14:51.612701    5361 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1124 13:14:51.625821    5361 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 13:14:51.639336    5361 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
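
The kubeadm YAML shown above lands at /var/tmp/minikube/kubeadm.yaml.new. Before kubeadm consumes it, such a file can be sanity-checked offline; this sketch assumes the `kubeadm config validate` subcommand, present in recent kubeadm releases:

	# hedged: validate the generated config without touching the cluster
	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml
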
	I1124 13:14:51.652148    5361 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1124 13:14:51.655705    5361 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 13:14:51.665681    5361 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 13:14:51.787413    5361 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 13:14:51.802639    5361 certs.go:69] Setting up /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/addons-647907 for IP: 192.168.49.2
	I1124 13:14:51.802702    5361 certs.go:195] generating shared ca certs ...
	I1124 13:14:51.802732    5361 certs.go:227] acquiring lock for ca certs: {Name:mk5b88bcf3bee8e73291a2c9c79f99bafa2afa7b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:14:51.802894    5361 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21932-2805/.minikube/ca.key
	I1124 13:14:52.077328    5361 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21932-2805/.minikube/ca.crt ...
	I1124 13:14:52.077364    5361 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2805/.minikube/ca.crt: {Name:mk006816f465c2c5820b705b1ef87c191af5a66e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:14:52.077575    5361 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21932-2805/.minikube/ca.key ...
	I1124 13:14:52.077589    5361 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2805/.minikube/ca.key: {Name:mk8a6badd37b65193516f56d8210e821ef116a99 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:14:52.077672    5361 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21932-2805/.minikube/proxy-client-ca.key
	I1124 13:14:52.181626    5361 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21932-2805/.minikube/proxy-client-ca.crt ...
	I1124 13:14:52.181654    5361 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2805/.minikube/proxy-client-ca.crt: {Name:mk2aadb170d054acc188db3efc8c5b2a6b5be842 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:14:52.181824    5361 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21932-2805/.minikube/proxy-client-ca.key ...
	I1124 13:14:52.181836    5361 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2805/.minikube/proxy-client-ca.key: {Name:mkec5e104c9dfb79f73922b399b39c84b56e6d28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:14:52.181917    5361 certs.go:257] generating profile certs ...
	I1124 13:14:52.181973    5361 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/addons-647907/client.key
	I1124 13:14:52.182000    5361 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/addons-647907/client.crt with IP's: []
	I1124 13:14:53.121408    5361 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/addons-647907/client.crt ...
	I1124 13:14:53.121443    5361 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/addons-647907/client.crt: {Name:mk74580fd15c6a031e8b356a42dbab7d3066e438 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:14:53.121629    5361 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/addons-647907/client.key ...
	I1124 13:14:53.121643    5361 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/addons-647907/client.key: {Name:mkab3610f80ed8c7989d7d5ffeb6775d895097f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:14:53.121768    5361 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/addons-647907/apiserver.key.a92d6616
	I1124 13:14:53.121789    5361 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/addons-647907/apiserver.crt.a92d6616 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1124 13:14:53.449274    5361 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/addons-647907/apiserver.crt.a92d6616 ...
	I1124 13:14:53.449305    5361 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/addons-647907/apiserver.crt.a92d6616: {Name:mkd81e0b0450ff367c0d93f5d17a46e135930fbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:14:53.449481    5361 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/addons-647907/apiserver.key.a92d6616 ...
	I1124 13:14:53.449497    5361 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/addons-647907/apiserver.key.a92d6616: {Name:mka149af667e8c33bbed2aab6934fd007fa6a659 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:14:53.449614    5361 certs.go:382] copying /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/addons-647907/apiserver.crt.a92d6616 -> /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/addons-647907/apiserver.crt
	I1124 13:14:53.449701    5361 certs.go:386] copying /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/addons-647907/apiserver.key.a92d6616 -> /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/addons-647907/apiserver.key
	I1124 13:14:53.449761    5361 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/addons-647907/proxy-client.key
	I1124 13:14:53.449781    5361 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/addons-647907/proxy-client.crt with IP's: []
	I1124 13:14:53.695377    5361 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/addons-647907/proxy-client.crt ...
	I1124 13:14:53.695407    5361 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/addons-647907/proxy-client.crt: {Name:mk1a0ddb087f9b67d6bc4e2c19e0cb9f4b734f49 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:14:53.695588    5361 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/addons-647907/proxy-client.key ...
	I1124 13:14:53.695600    5361 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/addons-647907/proxy-client.key: {Name:mked4da12ada2dc6b0d187dfeebc752f49dd0053 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:14:53.695807    5361 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca-key.pem (1679 bytes)
	I1124 13:14:53.695853    5361 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca.pem (1078 bytes)
	I1124 13:14:53.695885    5361 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2805/.minikube/certs/cert.pem (1123 bytes)
	I1124 13:14:53.695923    5361 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2805/.minikube/certs/key.pem (1675 bytes)
	I1124 13:14:53.696552    5361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 13:14:53.715084    5361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1124 13:14:53.733215    5361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 13:14:53.751055    5361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1124 13:14:53.769703    5361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/addons-647907/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1124 13:14:53.786821    5361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/addons-647907/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1124 13:14:53.804768    5361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/addons-647907/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 13:14:53.822387    5361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/addons-647907/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1124 13:14:53.840015    5361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 13:14:53.859764    5361 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 13:14:53.874245    5361 ssh_runner.go:195] Run: openssl version
	I1124 13:14:53.880665    5361 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 13:14:53.889831    5361 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 13:14:53.893757    5361 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 13:14 /usr/share/ca-certificates/minikubeCA.pem
	I1124 13:14:53.893849    5361 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 13:14:53.935116    5361 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
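
The b5213941.0 link name is not arbitrary: it is the certificate's subject hash, which OpenSSL's certificate-directory lookup uses to find trust anchors. Reproducing the two steps the log just performed (sketch):

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
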
	I1124 13:14:53.943891    5361 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 13:14:53.947601    5361 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1124 13:14:53.947659    5361 kubeadm.go:401] StartCluster: {Name:addons-647907 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-647907 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 13:14:53.947752    5361 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 13:14:53.947810    5361 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 13:14:53.976056    5361 cri.go:89] found id: ""
	I1124 13:14:53.976148    5361 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 13:14:53.983955    5361 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1124 13:14:53.991864    5361 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1124 13:14:53.991968    5361 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1124 13:14:54.000759    5361 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1124 13:14:54.000777    5361 kubeadm.go:158] found existing configuration files:
	
	I1124 13:14:54.000839    5361 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1124 13:14:54.011741    5361 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1124 13:14:54.011842    5361 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1124 13:14:54.019938    5361 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1124 13:14:54.028334    5361 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1124 13:14:54.028480    5361 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1124 13:14:54.036364    5361 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1124 13:14:54.044147    5361 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1124 13:14:54.044258    5361 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1124 13:14:54.051331    5361 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1124 13:14:54.059108    5361 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1124 13:14:54.059173    5361 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1124 13:14:54.066691    5361 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1124 13:14:54.105866    5361 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1124 13:14:54.106031    5361 kubeadm.go:319] [preflight] Running pre-flight checks
	I1124 13:14:54.140095    5361 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1124 13:14:54.140169    5361 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1124 13:14:54.140209    5361 kubeadm.go:319] OS: Linux
	I1124 13:14:54.140258    5361 kubeadm.go:319] CGROUPS_CPU: enabled
	I1124 13:14:54.140316    5361 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1124 13:14:54.140378    5361 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1124 13:14:54.140430    5361 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1124 13:14:54.140482    5361 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1124 13:14:54.140533    5361 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1124 13:14:54.140587    5361 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1124 13:14:54.140638    5361 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1124 13:14:54.140688    5361 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1124 13:14:54.219931    5361 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1124 13:14:54.220045    5361 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1124 13:14:54.220142    5361 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1124 13:14:54.227921    5361 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1124 13:14:54.231273    5361 out.go:252]   - Generating certificates and keys ...
	I1124 13:14:54.231441    5361 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1124 13:14:54.231564    5361 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1124 13:14:54.424452    5361 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1124 13:14:54.567937    5361 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1124 13:14:54.760763    5361 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1124 13:14:56.007671    5361 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1124 13:14:56.720997    5361 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1124 13:14:56.721385    5361 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-647907 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1124 13:14:56.944283    5361 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1124 13:14:56.944619    5361 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-647907 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1124 13:14:57.122470    5361 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1124 13:14:57.307920    5361 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1124 13:14:58.732174    5361 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1124 13:14:58.732595    5361 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1124 13:14:58.868347    5361 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1124 13:14:59.560600    5361 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1124 13:14:59.787797    5361 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1124 13:14:59.933101    5361 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1124 13:15:00.204718    5361 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1124 13:15:00.204955    5361 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1124 13:15:00.205028    5361 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1124 13:15:00.233308    5361 out.go:252]   - Booting up control plane ...
	I1124 13:15:00.233413    5361 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1124 13:15:00.233492    5361 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1124 13:15:00.233561    5361 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1124 13:15:00.233665    5361 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1124 13:15:00.233759    5361 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1124 13:15:00.245353    5361 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1124 13:15:00.260773    5361 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1124 13:15:00.260903    5361 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1124 13:15:00.504750    5361 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1124 13:15:00.504869    5361 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1124 13:15:01.504539    5361 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001092887s
	I1124 13:15:01.508207    5361 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1124 13:15:01.508299    5361 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1124 13:15:01.508421    5361 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1124 13:15:01.508496    5361 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1124 13:15:05.756080    5361 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 4.247178313s
	I1124 13:15:06.112181    5361 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.603944185s
	I1124 13:15:08.010035    5361 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.501610716s
	I1124 13:15:08.035110    5361 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1124 13:15:08.051648    5361 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1124 13:15:08.067676    5361 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1124 13:15:08.067874    5361 kubeadm.go:319] [mark-control-plane] Marking the node addons-647907 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1124 13:15:08.081728    5361 kubeadm.go:319] [bootstrap-token] Using token: bbiljv.00xeqiejrgdkivim
	I1124 13:15:08.084913    5361 out.go:252]   - Configuring RBAC rules ...
	I1124 13:15:08.085052    5361 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1124 13:15:08.095628    5361 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1124 13:15:08.105344    5361 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1124 13:15:08.110466    5361 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1124 13:15:08.115893    5361 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1124 13:15:08.121252    5361 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1124 13:15:08.419431    5361 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1124 13:15:08.858608    5361 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1124 13:15:09.419140    5361 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1124 13:15:09.420327    5361 kubeadm.go:319] 
	I1124 13:15:09.420415    5361 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1124 13:15:09.420425    5361 kubeadm.go:319] 
	I1124 13:15:09.420515    5361 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1124 13:15:09.420524    5361 kubeadm.go:319] 
	I1124 13:15:09.420553    5361 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1124 13:15:09.420621    5361 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1124 13:15:09.420686    5361 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1124 13:15:09.420693    5361 kubeadm.go:319] 
	I1124 13:15:09.420750    5361 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1124 13:15:09.420758    5361 kubeadm.go:319] 
	I1124 13:15:09.420809    5361 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1124 13:15:09.420817    5361 kubeadm.go:319] 
	I1124 13:15:09.420875    5361 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1124 13:15:09.420959    5361 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1124 13:15:09.421059    5361 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1124 13:15:09.421069    5361 kubeadm.go:319] 
	I1124 13:15:09.421173    5361 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1124 13:15:09.421260    5361 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1124 13:15:09.421267    5361 kubeadm.go:319] 
	I1124 13:15:09.421351    5361 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token bbiljv.00xeqiejrgdkivim \
	I1124 13:15:09.421458    5361 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:37f0f49cec723293ddb4e564b6685275917c85627d2c55051ccb0f083d16274f \
	I1124 13:15:09.421483    5361 kubeadm.go:319] 	--control-plane 
	I1124 13:15:09.421498    5361 kubeadm.go:319] 
	I1124 13:15:09.421593    5361 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1124 13:15:09.421601    5361 kubeadm.go:319] 
	I1124 13:15:09.421685    5361 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token bbiljv.00xeqiejrgdkivim \
	I1124 13:15:09.421795    5361 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:37f0f49cec723293ddb4e564b6685275917c85627d2c55051ccb0f083d16274f 
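
The --discovery-token-ca-cert-hash value in the join command is the SHA-256 of the cluster CA's DER-encoded public key. The standard recipe to recompute it from the CA on this node (sketch; assumes the RSA CA minikube generates):

	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'
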
	I1124 13:15:09.425415    5361 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1124 13:15:09.425649    5361 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1124 13:15:09.425759    5361 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1124 13:15:09.425778    5361 cni.go:84] Creating CNI manager for ""
	I1124 13:15:09.425793    5361 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 13:15:09.428982    5361 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1124 13:15:09.432025    5361 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1124 13:15:09.436341    5361 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1124 13:15:09.436378    5361 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1124 13:15:09.451139    5361 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1124 13:15:09.744097    5361 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1124 13:15:09.744248    5361 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:15:09.744329    5361 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-647907 minikube.k8s.io/updated_at=2025_11_24T13_15_09_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=b5d1c9f4e75f4e638a533695fd62619949cefcab minikube.k8s.io/name=addons-647907 minikube.k8s.io/primary=true
	I1124 13:15:09.903276    5361 ops.go:34] apiserver oom_adj: -16
	I1124 13:15:09.903426    5361 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:15:10.403788    5361 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:15:10.903501    5361 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:15:11.404218    5361 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:15:11.904490    5361 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:15:12.403517    5361 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:15:12.903974    5361 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:15:13.403509    5361 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:15:13.903526    5361 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:15:14.052237    5361 kubeadm.go:1114] duration metric: took 4.308032259s to wait for elevateKubeSystemPrivileges
	I1124 13:15:14.052264    5361 kubeadm.go:403] duration metric: took 20.10460896s to StartCluster
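
The repeated `kubectl get sa default` calls above are a poll: the default ServiceAccount appearing in the default namespace signals that the service-account controller is up, which is what "elevateKubeSystemPrivileges" waits on. An equivalent bounded wait as a one-liner (sketch):

	timeout 60 bash -c 'until sudo /var/lib/minikube/binaries/v1.34.1/kubectl \
	    --kubeconfig=/var/lib/minikube/kubeconfig get sa default >/dev/null 2>&1; do sleep 0.5; done'
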
	I1124 13:15:14.052280    5361 settings.go:142] acquiring lock: {Name:mk89c1ba43c874315f683e1eb3a8f5ff3817a931 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:15:14.052409    5361 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21932-2805/kubeconfig
	I1124 13:15:14.052810    5361 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2805/kubeconfig: {Name:mk95d10d27091d631e85a5a3c35d5e4e38630871 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:15:14.053003    5361 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 13:15:14.053150    5361 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1124 13:15:14.053417    5361 config.go:182] Loaded profile config "addons-647907": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 13:15:14.053456    5361 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1124 13:15:14.053531    5361 addons.go:70] Setting yakd=true in profile "addons-647907"
	I1124 13:15:14.053546    5361 addons.go:239] Setting addon yakd=true in "addons-647907"
	I1124 13:15:14.053568    5361 host.go:66] Checking if "addons-647907" exists ...
	I1124 13:15:14.054073    5361 cli_runner.go:164] Run: docker container inspect addons-647907 --format={{.State.Status}}
	I1124 13:15:14.054375    5361 addons.go:70] Setting inspektor-gadget=true in profile "addons-647907"
	I1124 13:15:14.054389    5361 addons.go:239] Setting addon inspektor-gadget=true in "addons-647907"
	I1124 13:15:14.054412    5361 host.go:66] Checking if "addons-647907" exists ...
	I1124 13:15:14.054821    5361 cli_runner.go:164] Run: docker container inspect addons-647907 --format={{.State.Status}}
	I1124 13:15:14.055167    5361 addons.go:70] Setting metrics-server=true in profile "addons-647907"
	I1124 13:15:14.055194    5361 addons.go:239] Setting addon metrics-server=true in "addons-647907"
	I1124 13:15:14.055225    5361 host.go:66] Checking if "addons-647907" exists ...
	I1124 13:15:14.055679    5361 cli_runner.go:164] Run: docker container inspect addons-647907 --format={{.State.Status}}
	I1124 13:15:14.059477    5361 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-647907"
	I1124 13:15:14.059553    5361 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-647907"
	I1124 13:15:14.059601    5361 host.go:66] Checking if "addons-647907" exists ...
	I1124 13:15:14.060110    5361 cli_runner.go:164] Run: docker container inspect addons-647907 --format={{.State.Status}}
	I1124 13:15:14.060316    5361 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-647907"
	I1124 13:15:14.060365    5361 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-647907"
	I1124 13:15:14.060402    5361 host.go:66] Checking if "addons-647907" exists ...
	I1124 13:15:14.060827    5361 cli_runner.go:164] Run: docker container inspect addons-647907 --format={{.State.Status}}
	I1124 13:15:14.067023    5361 addons.go:70] Setting cloud-spanner=true in profile "addons-647907"
	I1124 13:15:14.067049    5361 addons.go:70] Setting registry-creds=true in profile "addons-647907"
	I1124 13:15:14.067064    5361 addons.go:239] Setting addon cloud-spanner=true in "addons-647907"
	I1124 13:15:14.067068    5361 addons.go:239] Setting addon registry-creds=true in "addons-647907"
	I1124 13:15:14.067099    5361 host.go:66] Checking if "addons-647907" exists ...
	I1124 13:15:14.067028    5361 addons.go:70] Setting registry=true in profile "addons-647907"
	I1124 13:15:14.067111    5361 addons.go:239] Setting addon registry=true in "addons-647907"
	I1124 13:15:14.067125    5361 host.go:66] Checking if "addons-647907" exists ...
	I1124 13:15:14.067649    5361 cli_runner.go:164] Run: docker container inspect addons-647907 --format={{.State.Status}}
	I1124 13:15:14.067908    5361 cli_runner.go:164] Run: docker container inspect addons-647907 --format={{.State.Status}}
	I1124 13:15:14.077312    5361 addons.go:70] Setting storage-provisioner=true in profile "addons-647907"
	I1124 13:15:14.077358    5361 addons.go:239] Setting addon storage-provisioner=true in "addons-647907"
	I1124 13:15:14.077396    5361 host.go:66] Checking if "addons-647907" exists ...
	I1124 13:15:14.077878    5361 cli_runner.go:164] Run: docker container inspect addons-647907 --format={{.State.Status}}
	I1124 13:15:14.080428    5361 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-647907"
	I1124 13:15:14.080503    5361 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-647907"
	I1124 13:15:14.080536    5361 host.go:66] Checking if "addons-647907" exists ...
	I1124 13:15:14.081005    5361 cli_runner.go:164] Run: docker container inspect addons-647907 --format={{.State.Status}}
	I1124 13:15:14.095100    5361 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-647907"
	I1124 13:15:14.095134    5361 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-647907"
	I1124 13:15:14.095494    5361 cli_runner.go:164] Run: docker container inspect addons-647907 --format={{.State.Status}}
	I1124 13:15:14.108330    5361 addons.go:70] Setting default-storageclass=true in profile "addons-647907"
	I1124 13:15:14.108371    5361 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-647907"
	I1124 13:15:14.108699    5361 cli_runner.go:164] Run: docker container inspect addons-647907 --format={{.State.Status}}
	I1124 13:15:14.123797    5361 addons.go:70] Setting gcp-auth=true in profile "addons-647907"
	I1124 13:15:14.123832    5361 mustload.go:66] Loading cluster: addons-647907
	I1124 13:15:14.124321    5361 addons.go:70] Setting volcano=true in profile "addons-647907"
	I1124 13:15:14.124352    5361 addons.go:239] Setting addon volcano=true in "addons-647907"
	I1124 13:15:14.124400    5361 host.go:66] Checking if "addons-647907" exists ...
	I1124 13:15:14.124607    5361 config.go:182] Loaded profile config "addons-647907": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 13:15:14.124885    5361 cli_runner.go:164] Run: docker container inspect addons-647907 --format={{.State.Status}}
	I1124 13:15:14.128335    5361 cli_runner.go:164] Run: docker container inspect addons-647907 --format={{.State.Status}}
	I1124 13:15:14.147401    5361 addons.go:70] Setting ingress=true in profile "addons-647907"
	I1124 13:15:14.147429    5361 addons.go:239] Setting addon ingress=true in "addons-647907"
	I1124 13:15:14.147490    5361 host.go:66] Checking if "addons-647907" exists ...
	I1124 13:15:14.148724    5361 cli_runner.go:164] Run: docker container inspect addons-647907 --format={{.State.Status}}
	I1124 13:15:14.170988    5361 addons.go:70] Setting volumesnapshots=true in profile "addons-647907"
	I1124 13:15:14.171025    5361 addons.go:239] Setting addon volumesnapshots=true in "addons-647907"
	I1124 13:15:14.171061    5361 host.go:66] Checking if "addons-647907" exists ...
	I1124 13:15:14.171570    5361 cli_runner.go:164] Run: docker container inspect addons-647907 --format={{.State.Status}}
	I1124 13:15:14.175996    5361 addons.go:70] Setting ingress-dns=true in profile "addons-647907"
	I1124 13:15:14.176025    5361 addons.go:239] Setting addon ingress-dns=true in "addons-647907"
	I1124 13:15:14.176065    5361 host.go:66] Checking if "addons-647907" exists ...
	I1124 13:15:14.176570    5361 cli_runner.go:164] Run: docker container inspect addons-647907 --format={{.State.Status}}
	I1124 13:15:14.199452    5361 out.go:179] * Verifying Kubernetes components...
	I1124 13:15:14.067099    5361 host.go:66] Checking if "addons-647907" exists ...
	I1124 13:15:14.200261    5361 cli_runner.go:164] Run: docker container inspect addons-647907 --format={{.State.Status}}
	I1124 13:15:14.207700    5361 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 13:15:14.302029    5361 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1124 13:15:14.302301    5361 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1124 13:15:14.302345    5361 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1124 13:15:14.337839    5361 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1124 13:15:14.337855    5361 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1124 13:15:14.337937    5361 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-647907
	I1124 13:15:14.338498    5361 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.46.0
	I1124 13:15:14.339414    5361 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1124 13:15:14.339440    5361 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1124 13:15:14.339517    5361 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-647907
	I1124 13:15:14.364548    5361 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1124 13:15:14.364575    5361 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1124 13:15:14.364639    5361 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-647907
	I1124 13:15:14.369629    5361 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1124 13:15:14.372812    5361 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1124 13:15:14.372885    5361 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1124 13:15:14.372968    5361 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-647907
	I1124 13:15:14.389970    5361 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1124 13:15:14.389990    5361 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1124 13:15:14.390046    5361 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-647907
	I1124 13:15:14.401802    5361 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	W1124 13:15:14.402828    5361 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1124 13:15:14.404572    5361 addons.go:239] Setting addon default-storageclass=true in "addons-647907"
	I1124 13:15:14.407627    5361 host.go:66] Checking if "addons-647907" exists ...
	I1124 13:15:14.408053    5361 cli_runner.go:164] Run: docker container inspect addons-647907 --format={{.State.Status}}
	I1124 13:15:14.404599    5361 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1124 13:15:14.405286    5361 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-647907"
	I1124 13:15:14.408581    5361 host.go:66] Checking if "addons-647907" exists ...
	I1124 13:15:14.408981    5361 cli_runner.go:164] Run: docker container inspect addons-647907 --format={{.State.Status}}
	I1124 13:15:14.438212    5361 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1124 13:15:14.443044    5361 out.go:179]   - Using image docker.io/registry:3.0.0
	I1124 13:15:14.451301    5361 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1124 13:15:14.451332    5361 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1124 13:15:14.451429    5361 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-647907
	I1124 13:15:14.451661    5361 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 13:15:14.458520    5361 host.go:66] Checking if "addons-647907" exists ...
	I1124 13:15:14.460102    5361 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1124 13:15:14.464268    5361 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1124 13:15:14.464288    5361 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1124 13:15:14.464376    5361 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-647907
	I1124 13:15:14.489336    5361 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 13:15:14.489359    5361 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 13:15:14.489418    5361 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-647907
	I1124 13:15:14.496481    5361 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1124 13:15:14.501668    5361 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1124 13:15:14.504702    5361 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1124 13:15:14.508131    5361 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1124 13:15:14.510475    5361 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1124 13:15:14.514751    5361 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1124 13:15:14.514784    5361 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1124 13:15:14.514847    5361 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-647907
	I1124 13:15:14.514993    5361 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1124 13:15:14.518216    5361 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.0
	I1124 13:15:14.522020    5361 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1124 13:15:14.533406    5361 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1124 13:15:14.539787    5361 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	I1124 13:15:14.540004    5361 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1124 13:15:14.540177    5361 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1124 13:15:14.579180    5361 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1124 13:15:14.579265    5361 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1124 13:15:14.579407    5361 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-647907
	I1124 13:15:14.580914    5361 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/addons-647907/id_rsa Username:docker}
	I1124 13:15:14.590912    5361 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/addons-647907/id_rsa Username:docker}
	I1124 13:15:14.591551    5361 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/addons-647907/id_rsa Username:docker}
	I1124 13:15:14.602042    5361 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1124 13:15:14.602116    5361 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1124 13:15:14.602224    5361 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-647907
	I1124 13:15:14.623587    5361 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1124 13:15:14.623609    5361 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1124 13:15:14.623669    5361 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-647907
	I1124 13:15:14.632157    5361 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/addons-647907/id_rsa Username:docker}
	I1124 13:15:14.647170    5361 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/addons-647907/id_rsa Username:docker}
	I1124 13:15:14.647808    5361 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1124 13:15:14.647822    5361 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1124 13:15:14.647893    5361 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-647907
	I1124 13:15:14.673839    5361 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 13:15:14.673869    5361 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 13:15:14.673921    5361 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-647907
	I1124 13:15:14.697298    5361 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1124 13:15:14.700300    5361 out.go:179]   - Using image docker.io/busybox:stable
	I1124 13:15:14.706314    5361 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1124 13:15:14.706343    5361 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1124 13:15:14.706413    5361 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-647907
	I1124 13:15:14.731476    5361 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/addons-647907/id_rsa Username:docker}
	I1124 13:15:14.737170    5361 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/addons-647907/id_rsa Username:docker}
	I1124 13:15:14.770912    5361 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/addons-647907/id_rsa Username:docker}
	I1124 13:15:14.775829    5361 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/addons-647907/id_rsa Username:docker}
	I1124 13:15:14.776948    5361 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/addons-647907/id_rsa Username:docker}
	I1124 13:15:14.798735    5361 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/addons-647907/id_rsa Username:docker}
	I1124 13:15:14.809319    5361 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/addons-647907/id_rsa Username:docker}
	I1124 13:15:14.809893    5361 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/addons-647907/id_rsa Username:docker}
	W1124 13:15:14.820313    5361 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1124 13:15:14.820420    5361 retry.go:31] will retry after 186.99618ms: ssh: handshake failed: EOF
	I1124 13:15:14.832821    5361 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/addons-647907/id_rsa Username:docker}
	I1124 13:15:14.835555    5361 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/addons-647907/id_rsa Username:docker}
	W1124 13:15:15.013465    5361 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1124 13:15:15.013549    5361 retry.go:31] will retry after 561.464476ms: ssh: handshake failed: EOF
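
The two "handshake failed: EOF" warnings above are benign: the node's sshd is still coming up, so minikube's sshutil logs the failure and redials after a short delay. A minimal Go sketch of that dial-and-retry pattern using golang.org/x/crypto/ssh — the address, key path, user and retry policy below are taken from or modeled on the log lines, and this is an illustration, not minikube's actual sshutil code:

	package main

	import (
		"fmt"
		"os"
		"time"

		"golang.org/x/crypto/ssh"
	)

	// dial opens one SSH connection with public-key auth, as sshutil does.
	func dial(addr, keyPath, user string) (*ssh.Client, error) {
		key, err := os.ReadFile(keyPath)
		if err != nil {
			return nil, err
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			return nil, err
		}
		cfg := &ssh.ClientConfig{
			User:            user,
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local test node, never for production
			Timeout:         10 * time.Second,
		}
		return ssh.Dial("tcp", addr, cfg)
	}

	func main() {
		// Hypothetical paths/ports mirroring the log; adjust for your machine.
		keyPath := os.ExpandEnv("$HOME/.minikube/machines/addons-647907/id_rsa")
		for attempt := 1; attempt <= 5; attempt++ {
			c, err := dial("127.0.0.1:32768", keyPath, "docker")
			if err == nil {
				defer c.Close()
				fmt.Println("ssh client established")
				return
			}
			// "ssh: handshake failed: EOF" appears while sshd is still starting; back off and retry.
			fmt.Printf("dial failure (will retry): %v\n", err)
			time.Sleep(300 * time.Millisecond)
		}
	}
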
	I1124 13:15:15.035418    5361 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 13:15:15.035802    5361 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1124 13:15:15.276497    5361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 13:15:15.305164    5361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 13:15:15.331878    5361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1124 13:15:15.356054    5361 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1124 13:15:15.356123    5361 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1124 13:15:15.358599    5361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1124 13:15:15.370716    5361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1124 13:15:15.377818    5361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1124 13:15:15.398990    5361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1124 13:15:15.414875    5361 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1124 13:15:15.414947    5361 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1124 13:15:15.472043    5361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1124 13:15:15.497878    5361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1124 13:15:15.510503    5361 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1124 13:15:15.510575    5361 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1124 13:15:15.519330    5361 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1124 13:15:15.519403    5361 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1124 13:15:15.525540    5361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1124 13:15:15.532444    5361 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1124 13:15:15.532519    5361 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1124 13:15:15.691376    5361 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1124 13:15:15.691402    5361 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1124 13:15:15.705230    5361 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1124 13:15:15.705261    5361 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1124 13:15:15.738045    5361 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1124 13:15:15.738070    5361 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1124 13:15:15.742207    5361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1124 13:15:15.853148    5361 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1124 13:15:15.853174    5361 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1124 13:15:15.859317    5361 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1124 13:15:15.859393    5361 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1124 13:15:15.882748    5361 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1124 13:15:15.882777    5361 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1124 13:15:16.035409    5361 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1124 13:15:16.035432    5361 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1124 13:15:16.077607    5361 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1124 13:15:16.077632    5361 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1124 13:15:16.097756    5361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1124 13:15:16.245640    5361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1124 13:15:16.301538    5361 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1124 13:15:16.301565    5361 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1124 13:15:16.365916    5361 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1124 13:15:16.365942    5361 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1124 13:15:16.553541    5361 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1124 13:15:16.553616    5361 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1124 13:15:16.600584    5361 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1124 13:15:16.600653    5361 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1124 13:15:16.676275    5361 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1124 13:15:16.676357    5361 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1124 13:15:16.906225    5361 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1124 13:15:16.906304    5361 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1124 13:15:16.957563    5361 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.922064095s)
	I1124 13:15:16.958331    5361 node_ready.go:35] waiting up to 6m0s for node "addons-647907" to be "Ready" ...
	I1124 13:15:16.958581    5361 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.682053455s)
	I1124 13:15:16.958750    5361 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.922895766s)
	I1124 13:15:16.958789    5361 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
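
The bash pipeline that just completed rewrites the coredns ConfigMap in place: kubectl get | sed (inserting a hosts stanza before the forward plugin) | kubectl replace. The same host-record injection can be done programmatically with client-go; here is a minimal sketch under the assumption that the Corefile uses the stock "        forward ." stanza — the kubeconfig path, host IP and indentation are copied from the log, everything else is illustrative:

	package main

	import (
		"context"
		"fmt"
		"strings"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// hostsBlock is the stanza the log's sed command inserts before the forward plugin.
	const hostsBlock = "        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }\n"

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		ctx := context.Background()

		cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(ctx, "coredns", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		corefile := cm.Data["Corefile"]
		if !strings.Contains(corefile, "host.minikube.internal") {
			// Insert the hosts plugin just before "forward .", mirroring the sed '/forward/i' edit.
			cm.Data["Corefile"] = strings.Replace(corefile, "        forward .", hostsBlock+"        forward .", 1)
			if _, err := cs.CoreV1().ConfigMaps("kube-system").Update(ctx, cm, metav1.UpdateOptions{}); err != nil {
				panic(err)
			}
		}
		fmt.Println("host record ensured in CoreDNS Corefile")
	}
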
	I1124 13:15:16.981946    5361 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1124 13:15:16.982019    5361 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1124 13:15:17.156977    5361 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1124 13:15:17.157045    5361 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1124 13:15:17.185169    5361 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1124 13:15:17.185241    5361 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1124 13:15:17.252769    5361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1124 13:15:17.273692    5361 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1124 13:15:17.273762    5361 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1124 13:15:17.455554    5361 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1124 13:15:17.455631    5361 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1124 13:15:17.478243    5361 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-647907" context rescaled to 1 replicas
	I1124 13:15:17.655241    5361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1124 13:15:18.630107    5361 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.324861057s)
	I1124 13:15:18.630256    5361 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.298299895s)
	I1124 13:15:18.630314    5361 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.271649265s)
	I1124 13:15:18.630373    5361 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (3.259600555s)
	W1124 13:15:18.968215    5361 node_ready.go:57] node "addons-647907" has "Ready":"False" status (will retry)
	I1124 13:15:19.895371    5361 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.517498125s)
	I1124 13:15:19.895455    5361 addons.go:495] Verifying addon ingress=true in "addons-647907"
	I1124 13:15:19.895657    5361 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.496597533s)
	I1124 13:15:19.895883    5361 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.423768163s)
	I1124 13:15:19.895917    5361 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (4.397972703s)
	I1124 13:15:19.895985    5361 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (4.370386588s)
	I1124 13:15:19.896011    5361 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.153780143s)
	I1124 13:15:19.896017    5361 addons.go:495] Verifying addon registry=true in "addons-647907"
	I1124 13:15:19.896247    5361 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (3.798464587s)
	I1124 13:15:19.896418    5361 addons.go:495] Verifying addon metrics-server=true in "addons-647907"
	I1124 13:15:19.896488    5361 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (3.650819434s)
	I1124 13:15:19.899433    5361 out.go:179] * Verifying registry addon...
	I1124 13:15:19.899512    5361 out.go:179] * Verifying ingress addon...
	I1124 13:15:19.899552    5361 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-647907 service yakd-dashboard -n yakd-dashboard
	
	I1124 13:15:19.903901    5361 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1124 13:15:19.904763    5361 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1124 13:15:19.942026    5361 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1124 13:15:19.942046    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:19.942073    5361 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1124 13:15:19.942084    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
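
Each "waiting for pod ... current state: Pending" line that follows is one iteration of a poll loop over a label selector. A minimal client-go sketch of such a readiness wait — the selector and namespace are taken from the log; the interval, timeout and kubeconfig path are assumptions, and this is not minikube's kapi implementation:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podReady reports whether the Pod's Ready condition is True.
	func podReady(p *corev1.Pod) bool {
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				return true
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		sel := "kubernetes.io/minikube-addons=registry" // label selector from the log
		err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{LabelSelector: sel})
				if err != nil || len(pods.Items) == 0 {
					return false, nil // transient errors and empty lists: keep polling
				}
				for _, p := range pods.Items {
					if !podReady(&p) {
						return false, nil
					}
				}
				return true, nil
			})
		if err != nil {
			panic(err)
		}
		fmt.Println("all pods ready for", sel)
	}
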
	I1124 13:15:20.110282    5361 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.857424147s)
	W1124 13:15:20.110320    5361 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1124 13:15:20.110364    5361 retry.go:31] will retry after 181.086691ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
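
The failure above is an ordering race: the VolumeSnapshotClass object is applied in the same batch as the CRD that defines it, so the REST mapping lookup fails until the CRD is established, and minikube retries with a jittered backoff (and, just below, with kubectl apply --force). A minimal Go sketch of that retry loop — the fake apply function, attempt count and delays are illustrative, not retry.go itself:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// apply stands in for one "kubectl apply" attempt; it fails twice to exercise the retry path.
	func apply(attempt int) error {
		if attempt < 3 {
			return errors.New(`no matches for kind "VolumeSnapshotClass": ensure CRDs are installed first`)
		}
		return nil
	}

	func main() {
		backoff := 100 * time.Millisecond
		for attempt := 1; ; attempt++ {
			if err := apply(attempt); err == nil {
				fmt.Println("applied")
				return
			} else {
				// Jittered, roughly doubling delay, like the "will retry after 186.99618ms" lines.
				delay := backoff + time.Duration(rand.Int63n(int64(backoff)))
				fmt.Printf("will retry after %v: %v\n", delay, err)
				time.Sleep(delay)
				backoff *= 2
			}
		}
	}
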
	I1124 13:15:20.291984    5361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1124 13:15:20.374698    5361 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.719355747s)
	I1124 13:15:20.374735    5361 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-647907"
	I1124 13:15:20.377998    5361 out.go:179] * Verifying csi-hostpath-driver addon...
	I1124 13:15:20.381576    5361 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1124 13:15:20.395322    5361 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1124 13:15:20.395346    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:20.496480    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:20.496610    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:20.885591    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:20.908633    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:20.909399    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:21.385726    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:21.408094    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:21.408436    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1124 13:15:21.462390    5361 node_ready.go:57] node "addons-647907" has "Ready":"False" status (will retry)
	I1124 13:15:21.885253    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:21.908665    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:21.908715    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:22.070519    5361 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1124 13:15:22.070605    5361 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-647907
	I1124 13:15:22.088164    5361 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/addons-647907/id_rsa Username:docker}
	I1124 13:15:22.205216    5361 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1124 13:15:22.217381    5361 addons.go:239] Setting addon gcp-auth=true in "addons-647907"
	I1124 13:15:22.217437    5361 host.go:66] Checking if "addons-647907" exists ...
	I1124 13:15:22.217879    5361 cli_runner.go:164] Run: docker container inspect addons-647907 --format={{.State.Status}}
	I1124 13:15:22.234571    5361 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1124 13:15:22.234623    5361 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-647907
	I1124 13:15:22.252244    5361 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/addons-647907/id_rsa Username:docker}
	I1124 13:15:22.385413    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:22.406979    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:22.407316    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:22.885585    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:22.910598    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:22.912674    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:23.116587    5361 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.824552839s)
	I1124 13:15:23.119808    5361 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1124 13:15:23.122761    5361 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1124 13:15:23.125534    5361 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1124 13:15:23.125554    5361 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1124 13:15:23.138119    5361 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1124 13:15:23.138178    5361 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1124 13:15:23.151188    5361 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1124 13:15:23.151252    5361 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1124 13:15:23.163134    5361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1124 13:15:23.385933    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:23.407627    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:23.409283    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:23.625159    5361 addons.go:495] Verifying addon gcp-auth=true in "addons-647907"
	I1124 13:15:23.628312    5361 out.go:179] * Verifying gcp-auth addon...
	I1124 13:15:23.631820    5361 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1124 13:15:23.641236    5361 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1124 13:15:23.641306    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:23.885817    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:23.907993    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:23.908063    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1124 13:15:23.961969    5361 node_ready.go:57] node "addons-647907" has "Ready":"False" status (will retry)
	I1124 13:15:24.134943    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:24.384803    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:24.406721    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:24.409324    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:24.635216    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:24.885684    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:24.908098    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:24.908243    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:25.134945    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:25.384950    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:25.406921    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:25.407683    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:25.635432    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:25.884379    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:25.907591    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:25.908851    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:26.135345    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:26.385333    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:26.406773    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:26.407678    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1124 13:15:26.461344    5361 node_ready.go:57] node "addons-647907" has "Ready":"False" status (will retry)
	I1124 13:15:26.635523    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:26.884682    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:26.907670    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:26.907816    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:27.134946    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:27.384904    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:27.406820    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:27.407583    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:27.635149    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:27.885344    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:27.907981    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:27.908922    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:28.135313    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:28.385296    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:28.407272    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:28.408031    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1124 13:15:28.461617    5361 node_ready.go:57] node "addons-647907" has "Ready":"False" status (will retry)
	I1124 13:15:28.635559    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:28.884662    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:28.907668    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:28.907803    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:29.135399    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:29.385184    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:29.406692    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:29.407755    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:29.635139    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:29.884978    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:29.907248    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:29.907895    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:30.135470    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:30.384235    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:30.407075    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:30.407424    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:30.635452    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:30.885560    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:30.907399    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:30.907708    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1124 13:15:30.961448    5361 node_ready.go:57] node "addons-647907" has "Ready":"False" status (will retry)
	I1124 13:15:31.135470    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:31.385196    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:31.406865    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:31.408076    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:31.634524    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:31.885186    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:31.907605    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:31.908156    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:32.135378    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:32.384307    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:32.406793    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:32.407644    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:32.635146    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:32.887701    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:32.908345    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:32.908542    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:33.135129    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:33.384713    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:33.407970    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:33.408063    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1124 13:15:33.461658    5361 node_ready.go:57] node "addons-647907" has "Ready":"False" status (will retry)
	I1124 13:15:33.635408    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:33.885243    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:33.908388    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:33.908942    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:34.135431    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:34.385312    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:34.406721    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:34.407926    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:34.635426    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:34.885799    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:34.907799    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:34.908061    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:35.136248    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:35.385254    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:35.407135    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:35.408856    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:35.635396    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:35.884279    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:35.908650    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:35.908785    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1124 13:15:35.961432    5361 node_ready.go:57] node "addons-647907" has "Ready":"False" status (will retry)
	I1124 13:15:36.135317    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:36.385137    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:36.406775    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:36.407805    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:36.635689    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:36.884617    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:36.907888    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:36.908372    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:37.134814    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:37.384598    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:37.408479    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:37.408911    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:37.634946    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:37.884857    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:37.907110    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:37.908493    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:38.135226    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:38.385152    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:38.407624    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:38.408090    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1124 13:15:38.461727    5361 node_ready.go:57] node "addons-647907" has "Ready":"False" status (will retry)
	I1124 13:15:38.635317    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:38.884582    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:38.908783    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:38.908949    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:39.134691    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:39.385096    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:39.406854    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:39.407268    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:39.634588    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:39.884328    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:39.906861    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:39.908214    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:40.135549    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:40.384316    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:40.407910    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:40.408075    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:40.635235    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:40.885837    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:40.907117    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:40.907600    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1124 13:15:40.961409    5361 node_ready.go:57] node "addons-647907" has "Ready":"False" status (will retry)
	I1124 13:15:41.135026    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:41.385148    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:41.406962    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:41.409502    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:41.634732    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:41.884246    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:41.907196    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:41.908846    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:42.136671    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:42.384712    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:42.407791    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:42.407928    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:42.635313    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:42.886071    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:42.908549    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:42.916324    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1124 13:15:42.963090    5361 node_ready.go:57] node "addons-647907" has "Ready":"False" status (will retry)
	I1124 13:15:43.135145    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:43.385192    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:43.406733    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:43.407877    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:43.635175    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:43.885028    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:43.908263    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:43.908380    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:44.135075    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:44.385429    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:44.407295    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:44.408690    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:44.635213    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:44.884906    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:44.906868    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:44.908088    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:45.145899    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:45.385095    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:45.407336    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:45.407471    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1124 13:15:45.461049    5361 node_ready.go:57] node "addons-647907" has "Ready":"False" status (will retry)
	I1124 13:15:45.635022    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:45.885012    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:45.907859    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:45.908889    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:46.135432    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:46.384468    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:46.408588    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:46.409427    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:46.635121    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:46.885669    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:46.908075    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:46.908409    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:47.135679    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:47.384968    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:47.406787    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:47.407239    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1124 13:15:47.461901    5361 node_ready.go:57] node "addons-647907" has "Ready":"False" status (will retry)
	I1124 13:15:47.635150    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:47.885311    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:47.908853    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:47.909239    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:48.134719    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:48.384815    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:48.408553    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:48.408849    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:48.635066    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:48.884981    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:48.908187    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:48.908472    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:49.134823    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:49.384890    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:49.406768    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:49.407524    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:49.635017    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:49.890230    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:49.907496    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:49.908771    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1124 13:15:49.961505    5361 node_ready.go:57] node "addons-647907" has "Ready":"False" status (will retry)
	I1124 13:15:50.135505    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:50.385406    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:50.407673    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:50.407826    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:50.635532    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:50.884180    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:50.907440    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:50.908703    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:51.135466    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:51.384553    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:51.407738    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:51.408015    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:51.634770    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:51.884792    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:51.907456    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:51.907974    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1124 13:15:51.961684    5361 node_ready.go:57] node "addons-647907" has "Ready":"False" status (will retry)
	I1124 13:15:52.135493    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:52.384331    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:52.406971    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:52.407886    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:52.634445    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:52.885210    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:52.906953    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:52.909489    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:53.134947    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:53.385055    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:53.407599    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:53.408029    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:53.634574    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:53.885539    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:53.907576    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:53.907730    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:54.135455    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:54.385156    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:54.406970    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:54.407587    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1124 13:15:54.462420    5361 node_ready.go:57] node "addons-647907" has "Ready":"False" status (will retry)
	I1124 13:15:54.635281    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:54.905066    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:54.923691    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:54.924098    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:54.985496    5361 node_ready.go:49] node "addons-647907" is "Ready"
	I1124 13:15:54.985529    5361 node_ready.go:38] duration metric: took 38.027131264s for node "addons-647907" to be "Ready" ...
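For context on the `node_ready.go` lines above (the repeated `has "Ready":"False" status (will retry)` warnings followed by the 38s duration metric): the pattern is a simple poll of the node object until its Ready condition reports True. A minimal client-go sketch of that loop follows; the kubeconfig path, the 2-second poll interval, and the 6-minute timeout are illustrative assumptions, not minikube's actual values.

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls until the node's Ready condition is True, mirroring
// the "will retry" loop in the log above.
func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) error {
	for {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil // node is "Ready"
				}
			}
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(2 * time.Second): // assumed poll interval
		}
	}
}

func main() {
	// Kubeconfig path is an assumption made for this sketch.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	if err := waitNodeReady(ctx, cs, "addons-647907"); err != nil {
		panic(err)
	}
	fmt.Println(`node "addons-647907" is "Ready"`)
}
```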
	I1124 13:15:54.985544    5361 api_server.go:52] waiting for apiserver process to appear ...
	I1124 13:15:54.985608    5361 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 13:15:55.007708    5361 api_server.go:72] duration metric: took 40.954676167s to wait for apiserver process to appear ...
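The apiserver-process step above shells out to pgrep on the node (via minikube's ssh_runner). Run locally instead of over SSH, which is the assumption here, the same check reduces to:

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same predicate as the log: -f matches the full command line, -x requires
	// the pattern to match it exactly, -n picks the newest matching process.
	out, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	if err != nil {
		// pgrep exits non-zero when nothing matches; the caller would retry.
		fmt.Println("apiserver process not found yet:", err)
		return
	}
	fmt.Printf("apiserver pid: %s", out)
}
```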
	I1124 13:15:55.007743    5361 api_server.go:88] waiting for apiserver healthz status ...
	I1124 13:15:55.007786    5361 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1124 13:15:55.032974    5361 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1124 13:15:55.035570    5361 api_server.go:141] control plane version: v1.34.1
	I1124 13:15:55.035693    5361 api_server.go:131] duration metric: took 27.940527ms to wait for apiserver health ...
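The healthz wait above is a plain HTTPS GET against the apiserver endpoint from the log until it returns 200 with body "ok". A self-contained sketch: note that certificate verification is skipped here only to keep the example standalone (an assumption); the real client trusts the cluster CA.

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Skipping TLS verification is an assumption for this sketch only.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.49.2:8443/healthz")
	if err != nil {
		fmt.Println("healthz not reachable yet:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d: %s\n", resp.StatusCode, body) // expect "200: ok"
}
```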
	I1124 13:15:55.035722    5361 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 13:15:55.056401    5361 system_pods.go:59] 19 kube-system pods found
	I1124 13:15:55.056484    5361 system_pods.go:61] "coredns-66bc5c9577-hhndw" [65c642fe-4303-4dda-b00c-9892237a778c] Pending
	I1124 13:15:55.056506    5361 system_pods.go:61] "csi-hostpath-attacher-0" [71395414-4872-496e-9036-28fa99a9b658] Pending
	I1124 13:15:55.056526    5361 system_pods.go:61] "csi-hostpath-resizer-0" [279dab72-076d-46b9-90f4-1d69768633da] Pending
	I1124 13:15:55.056563    5361 system_pods.go:61] "csi-hostpathplugin-89nqp" [933c89cf-7b06-45b0-a9b4-be0b5f3b3bde] Pending
	I1124 13:15:55.056583    5361 system_pods.go:61] "etcd-addons-647907" [876e6603-49e6-44f6-9331-ed2a53b6657b] Running
	I1124 13:15:55.056603    5361 system_pods.go:61] "kindnet-cq7x5" [d7365392-f359-4f14-93e6-ae671834849d] Running
	I1124 13:15:55.056636    5361 system_pods.go:61] "kube-apiserver-addons-647907" [af27e5b5-2033-4c23-95d2-39ec0194646d] Running
	I1124 13:15:55.056662    5361 system_pods.go:61] "kube-controller-manager-addons-647907" [132a9fce-2239-4973-a9e3-3af445872d21] Running
	I1124 13:15:55.056686    5361 system_pods.go:61] "kube-ingress-dns-minikube" [8bade7cb-4229-4fda-9216-6adefd2a920f] Pending
	I1124 13:15:55.056721    5361 system_pods.go:61] "kube-proxy-n8mpw" [56957e05-4c86-4594-937e-b547b9ffdb86] Running
	I1124 13:15:55.056743    5361 system_pods.go:61] "kube-scheduler-addons-647907" [e8469bdd-2a1b-4d5a-932b-866e717756f2] Running
	I1124 13:15:55.056764    5361 system_pods.go:61] "metrics-server-85b7d694d7-xlsnf" [1ef721be-841d-4a36-949e-55c1b518e346] Pending
	I1124 13:15:55.056800    5361 system_pods.go:61] "nvidia-device-plugin-daemonset-dn469" [983203db-3631-4a06-80b8-418beda496e4] Pending
	I1124 13:15:55.056823    5361 system_pods.go:61] "registry-6b586f9694-2l7kt" [9c4c505b-2065-4059-a0a8-66ae39acca38] Pending
	I1124 13:15:55.056843    5361 system_pods.go:61] "registry-creds-764b6fb674-pf26d" [eb7a3229-dc9f-4def-8eb7-2b7e2e8b7ad6] Pending
	I1124 13:15:55.056876    5361 system_pods.go:61] "registry-proxy-9hgsb" [fe9ffd77-b342-49e0-b2bb-e1634ce62247] Pending
	I1124 13:15:55.056899    5361 system_pods.go:61] "snapshot-controller-7d9fbc56b8-49v4w" [dc47a8db-7218-447d-b1a6-4534add69c8f] Pending
	I1124 13:15:55.056919    5361 system_pods.go:61] "snapshot-controller-7d9fbc56b8-qnq5w" [112f4619-ab05-454a-bb09-7bce99c81b00] Pending
	I1124 13:15:55.056956    5361 system_pods.go:61] "storage-provisioner" [cc0f47a2-043c-4b50-bc22-c33673a0ea35] Pending
	I1124 13:15:55.056981    5361 system_pods.go:74] duration metric: took 21.23853ms to wait for pod list to return data ...
	I1124 13:15:55.057042    5361 default_sa.go:34] waiting for default service account to be created ...
	I1124 13:15:55.081341    5361 default_sa.go:45] found service account: "default"
	I1124 13:15:55.081418    5361 default_sa.go:55] duration metric: took 24.351747ms for default service account to be created ...
	I1124 13:15:55.081442    5361 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 13:15:55.098846    5361 system_pods.go:86] 19 kube-system pods found
	I1124 13:15:55.098932    5361 system_pods.go:89] "coredns-66bc5c9577-hhndw" [65c642fe-4303-4dda-b00c-9892237a778c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 13:15:55.098956    5361 system_pods.go:89] "csi-hostpath-attacher-0" [71395414-4872-496e-9036-28fa99a9b658] Pending
	I1124 13:15:55.099016    5361 system_pods.go:89] "csi-hostpath-resizer-0" [279dab72-076d-46b9-90f4-1d69768633da] Pending
	I1124 13:15:55.099041    5361 system_pods.go:89] "csi-hostpathplugin-89nqp" [933c89cf-7b06-45b0-a9b4-be0b5f3b3bde] Pending
	I1124 13:15:55.099065    5361 system_pods.go:89] "etcd-addons-647907" [876e6603-49e6-44f6-9331-ed2a53b6657b] Running
	I1124 13:15:55.099100    5361 system_pods.go:89] "kindnet-cq7x5" [d7365392-f359-4f14-93e6-ae671834849d] Running
	I1124 13:15:55.099126    5361 system_pods.go:89] "kube-apiserver-addons-647907" [af27e5b5-2033-4c23-95d2-39ec0194646d] Running
	I1124 13:15:55.099146    5361 system_pods.go:89] "kube-controller-manager-addons-647907" [132a9fce-2239-4973-a9e3-3af445872d21] Running
	I1124 13:15:55.099185    5361 system_pods.go:89] "kube-ingress-dns-minikube" [8bade7cb-4229-4fda-9216-6adefd2a920f] Pending
	I1124 13:15:55.099208    5361 system_pods.go:89] "kube-proxy-n8mpw" [56957e05-4c86-4594-937e-b547b9ffdb86] Running
	I1124 13:15:55.099232    5361 system_pods.go:89] "kube-scheduler-addons-647907" [e8469bdd-2a1b-4d5a-932b-866e717756f2] Running
	I1124 13:15:55.099266    5361 system_pods.go:89] "metrics-server-85b7d694d7-xlsnf" [1ef721be-841d-4a36-949e-55c1b518e346] Pending
	I1124 13:15:55.099291    5361 system_pods.go:89] "nvidia-device-plugin-daemonset-dn469" [983203db-3631-4a06-80b8-418beda496e4] Pending
	I1124 13:15:55.099315    5361 system_pods.go:89] "registry-6b586f9694-2l7kt" [9c4c505b-2065-4059-a0a8-66ae39acca38] Pending
	I1124 13:15:55.099348    5361 system_pods.go:89] "registry-creds-764b6fb674-pf26d" [eb7a3229-dc9f-4def-8eb7-2b7e2e8b7ad6] Pending
	I1124 13:15:55.099404    5361 system_pods.go:89] "registry-proxy-9hgsb" [fe9ffd77-b342-49e0-b2bb-e1634ce62247] Pending
	I1124 13:15:55.099438    5361 system_pods.go:89] "snapshot-controller-7d9fbc56b8-49v4w" [dc47a8db-7218-447d-b1a6-4534add69c8f] Pending
	I1124 13:15:55.099459    5361 system_pods.go:89] "snapshot-controller-7d9fbc56b8-qnq5w" [112f4619-ab05-454a-bb09-7bce99c81b00] Pending
	I1124 13:15:55.099480    5361 system_pods.go:89] "storage-provisioner" [cc0f47a2-043c-4b50-bc22-c33673a0ea35] Pending
	I1124 13:15:55.099522    5361 retry.go:31] will retry after 306.124483ms: missing components: kube-dns
	I1124 13:15:55.155384    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:55.400825    5361 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1124 13:15:55.400896    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:55.425835    5361 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1124 13:15:55.425903    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:55.426201    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
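The kapi.go lines that dominate this log list pods by label selector and keep waiting while any of them is still Pending (the `[<nil>]` is the raw status slice minikube prints); the `Found N Pods for label selector` lines just above mark the moment the selector first matches. A rough client-go sketch of that list-and-check step; the selector and namespace are taken from the log, while the helper name and kubeconfig path are made up for the sketch.

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// allRunning reports whether every pod matching the selector exists and is Running.
func allRunning(ctx context.Context, cs *kubernetes.Clientset, ns, selector string) (bool, error) {
	pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
	if err != nil || len(pods.Items) == 0 {
		return false, err
	}
	fmt.Printf("Found %d Pods for label selector %s\n", len(pods.Items), selector)
	for _, p := range pods.Items {
		if p.Status.Phase != corev1.PodRunning {
			return false, nil // still waiting; caller polls again
		}
	}
	return true, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // assumed path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ok, err := allRunning(context.Background(), cs, "kube-system",
		"kubernetes.io/minikube-addons=registry")
	fmt.Println(ok, err)
}
```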
	I1124 13:15:55.442348    5361 system_pods.go:86] 19 kube-system pods found
	I1124 13:15:55.442434    5361 system_pods.go:89] "coredns-66bc5c9577-hhndw" [65c642fe-4303-4dda-b00c-9892237a778c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 13:15:55.442457    5361 system_pods.go:89] "csi-hostpath-attacher-0" [71395414-4872-496e-9036-28fa99a9b658] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1124 13:15:55.442499    5361 system_pods.go:89] "csi-hostpath-resizer-0" [279dab72-076d-46b9-90f4-1d69768633da] Pending
	I1124 13:15:55.442524    5361 system_pods.go:89] "csi-hostpathplugin-89nqp" [933c89cf-7b06-45b0-a9b4-be0b5f3b3bde] Pending
	I1124 13:15:55.442546    5361 system_pods.go:89] "etcd-addons-647907" [876e6603-49e6-44f6-9331-ed2a53b6657b] Running
	I1124 13:15:55.442583    5361 system_pods.go:89] "kindnet-cq7x5" [d7365392-f359-4f14-93e6-ae671834849d] Running
	I1124 13:15:55.442607    5361 system_pods.go:89] "kube-apiserver-addons-647907" [af27e5b5-2033-4c23-95d2-39ec0194646d] Running
	I1124 13:15:55.442626    5361 system_pods.go:89] "kube-controller-manager-addons-647907" [132a9fce-2239-4973-a9e3-3af445872d21] Running
	I1124 13:15:55.442671    5361 system_pods.go:89] "kube-ingress-dns-minikube" [8bade7cb-4229-4fda-9216-6adefd2a920f] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1124 13:15:55.442695    5361 system_pods.go:89] "kube-proxy-n8mpw" [56957e05-4c86-4594-937e-b547b9ffdb86] Running
	I1124 13:15:55.442721    5361 system_pods.go:89] "kube-scheduler-addons-647907" [e8469bdd-2a1b-4d5a-932b-866e717756f2] Running
	I1124 13:15:55.442755    5361 system_pods.go:89] "metrics-server-85b7d694d7-xlsnf" [1ef721be-841d-4a36-949e-55c1b518e346] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1124 13:15:55.442779    5361 system_pods.go:89] "nvidia-device-plugin-daemonset-dn469" [983203db-3631-4a06-80b8-418beda496e4] Pending
	I1124 13:15:55.442803    5361 system_pods.go:89] "registry-6b586f9694-2l7kt" [9c4c505b-2065-4059-a0a8-66ae39acca38] Pending
	I1124 13:15:55.442842    5361 system_pods.go:89] "registry-creds-764b6fb674-pf26d" [eb7a3229-dc9f-4def-8eb7-2b7e2e8b7ad6] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1124 13:15:55.442869    5361 system_pods.go:89] "registry-proxy-9hgsb" [fe9ffd77-b342-49e0-b2bb-e1634ce62247] Pending
	I1124 13:15:55.442893    5361 system_pods.go:89] "snapshot-controller-7d9fbc56b8-49v4w" [dc47a8db-7218-447d-b1a6-4534add69c8f] Pending
	I1124 13:15:55.442931    5361 system_pods.go:89] "snapshot-controller-7d9fbc56b8-qnq5w" [112f4619-ab05-454a-bb09-7bce99c81b00] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1124 13:15:55.442955    5361 system_pods.go:89] "storage-provisioner" [cc0f47a2-043c-4b50-bc22-c33673a0ea35] Pending
	I1124 13:15:55.442988    5361 retry.go:31] will retry after 249.933697ms: missing components: kube-dns
	I1124 13:15:55.639893    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:55.705965    5361 system_pods.go:86] 19 kube-system pods found
	I1124 13:15:55.706054    5361 system_pods.go:89] "coredns-66bc5c9577-hhndw" [65c642fe-4303-4dda-b00c-9892237a778c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 13:15:55.706081    5361 system_pods.go:89] "csi-hostpath-attacher-0" [71395414-4872-496e-9036-28fa99a9b658] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1124 13:15:55.706121    5361 system_pods.go:89] "csi-hostpath-resizer-0" [279dab72-076d-46b9-90f4-1d69768633da] Pending
	I1124 13:15:55.706149    5361 system_pods.go:89] "csi-hostpathplugin-89nqp" [933c89cf-7b06-45b0-a9b4-be0b5f3b3bde] Pending
	I1124 13:15:55.706174    5361 system_pods.go:89] "etcd-addons-647907" [876e6603-49e6-44f6-9331-ed2a53b6657b] Running
	I1124 13:15:55.706214    5361 system_pods.go:89] "kindnet-cq7x5" [d7365392-f359-4f14-93e6-ae671834849d] Running
	I1124 13:15:55.706241    5361 system_pods.go:89] "kube-apiserver-addons-647907" [af27e5b5-2033-4c23-95d2-39ec0194646d] Running
	I1124 13:15:55.706265    5361 system_pods.go:89] "kube-controller-manager-addons-647907" [132a9fce-2239-4973-a9e3-3af445872d21] Running
	I1124 13:15:55.706304    5361 system_pods.go:89] "kube-ingress-dns-minikube" [8bade7cb-4229-4fda-9216-6adefd2a920f] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1124 13:15:55.706330    5361 system_pods.go:89] "kube-proxy-n8mpw" [56957e05-4c86-4594-937e-b547b9ffdb86] Running
	I1124 13:15:55.706355    5361 system_pods.go:89] "kube-scheduler-addons-647907" [e8469bdd-2a1b-4d5a-932b-866e717756f2] Running
	I1124 13:15:55.706388    5361 system_pods.go:89] "metrics-server-85b7d694d7-xlsnf" [1ef721be-841d-4a36-949e-55c1b518e346] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1124 13:15:55.706415    5361 system_pods.go:89] "nvidia-device-plugin-daemonset-dn469" [983203db-3631-4a06-80b8-418beda496e4] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1124 13:15:55.706438    5361 system_pods.go:89] "registry-6b586f9694-2l7kt" [9c4c505b-2065-4059-a0a8-66ae39acca38] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1124 13:15:55.706477    5361 system_pods.go:89] "registry-creds-764b6fb674-pf26d" [eb7a3229-dc9f-4def-8eb7-2b7e2e8b7ad6] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1124 13:15:55.706502    5361 system_pods.go:89] "registry-proxy-9hgsb" [fe9ffd77-b342-49e0-b2bb-e1634ce62247] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1124 13:15:55.706526    5361 system_pods.go:89] "snapshot-controller-7d9fbc56b8-49v4w" [dc47a8db-7218-447d-b1a6-4534add69c8f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1124 13:15:55.706565    5361 system_pods.go:89] "snapshot-controller-7d9fbc56b8-qnq5w" [112f4619-ab05-454a-bb09-7bce99c81b00] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1124 13:15:55.706591    5361 system_pods.go:89] "storage-provisioner" [cc0f47a2-043c-4b50-bc22-c33673a0ea35] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 13:15:55.706637    5361 retry.go:31] will retry after 365.091725ms: missing components: kube-dns
	I1124 13:15:55.886783    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:55.989028    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:55.989349    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:56.097669    5361 system_pods.go:86] 19 kube-system pods found
	I1124 13:15:56.097761    5361 system_pods.go:89] "coredns-66bc5c9577-hhndw" [65c642fe-4303-4dda-b00c-9892237a778c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 13:15:56.097786    5361 system_pods.go:89] "csi-hostpath-attacher-0" [71395414-4872-496e-9036-28fa99a9b658] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1124 13:15:56.097827    5361 system_pods.go:89] "csi-hostpath-resizer-0" [279dab72-076d-46b9-90f4-1d69768633da] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1124 13:15:56.097854    5361 system_pods.go:89] "csi-hostpathplugin-89nqp" [933c89cf-7b06-45b0-a9b4-be0b5f3b3bde] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1124 13:15:56.097878    5361 system_pods.go:89] "etcd-addons-647907" [876e6603-49e6-44f6-9331-ed2a53b6657b] Running
	I1124 13:15:56.097913    5361 system_pods.go:89] "kindnet-cq7x5" [d7365392-f359-4f14-93e6-ae671834849d] Running
	I1124 13:15:56.097936    5361 system_pods.go:89] "kube-apiserver-addons-647907" [af27e5b5-2033-4c23-95d2-39ec0194646d] Running
	I1124 13:15:56.097956    5361 system_pods.go:89] "kube-controller-manager-addons-647907" [132a9fce-2239-4973-a9e3-3af445872d21] Running
	I1124 13:15:56.097994    5361 system_pods.go:89] "kube-ingress-dns-minikube" [8bade7cb-4229-4fda-9216-6adefd2a920f] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1124 13:15:56.098016    5361 system_pods.go:89] "kube-proxy-n8mpw" [56957e05-4c86-4594-937e-b547b9ffdb86] Running
	I1124 13:15:56.098039    5361 system_pods.go:89] "kube-scheduler-addons-647907" [e8469bdd-2a1b-4d5a-932b-866e717756f2] Running
	I1124 13:15:56.098077    5361 system_pods.go:89] "metrics-server-85b7d694d7-xlsnf" [1ef721be-841d-4a36-949e-55c1b518e346] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1124 13:15:56.098102    5361 system_pods.go:89] "nvidia-device-plugin-daemonset-dn469" [983203db-3631-4a06-80b8-418beda496e4] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1124 13:15:56.098125    5361 system_pods.go:89] "registry-6b586f9694-2l7kt" [9c4c505b-2065-4059-a0a8-66ae39acca38] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1124 13:15:56.098163    5361 system_pods.go:89] "registry-creds-764b6fb674-pf26d" [eb7a3229-dc9f-4def-8eb7-2b7e2e8b7ad6] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1124 13:15:56.098190    5361 system_pods.go:89] "registry-proxy-9hgsb" [fe9ffd77-b342-49e0-b2bb-e1634ce62247] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1124 13:15:56.098216    5361 system_pods.go:89] "snapshot-controller-7d9fbc56b8-49v4w" [dc47a8db-7218-447d-b1a6-4534add69c8f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1124 13:15:56.098251    5361 system_pods.go:89] "snapshot-controller-7d9fbc56b8-qnq5w" [112f4619-ab05-454a-bb09-7bce99c81b00] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1124 13:15:56.098276    5361 system_pods.go:89] "storage-provisioner" [cc0f47a2-043c-4b50-bc22-c33673a0ea35] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 13:15:56.098303    5361 system_pods.go:126] duration metric: took 1.016840894s to wait for k8s-apps to be running ...
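The retry.go lines above (`will retry after 306.124483ms: missing components: kube-dns`) come from a jittered-backoff loop that re-lists the kube-system pods until every required component is running. A simplified sketch of that control flow; the stubbed component check and the jitter math are illustrative assumptions, not minikube's actual implementation.

```go
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// missingComponents would normally be derived from a live pod list; here it
// is a stub that reports kube-dns as missing for the first few attempts.
func missingComponents(attempt int) []string {
	if attempt < 3 {
		return []string{"kube-dns"}
	}
	return nil
}

func main() {
	base := 250 * time.Millisecond
	for attempt := 0; ; attempt++ {
		missing := missingComponents(attempt)
		if len(missing) == 0 {
			fmt.Println("all k8s-apps running")
			return
		}
		// Jittered delay, similar in spirit to the 306ms/249ms/365ms waits above.
		delay := base + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: missing components: %v\n", delay, missing)
		time.Sleep(delay)
	}
}
```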
	I1124 13:15:56.098342    5361 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 13:15:56.098429    5361 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 13:15:56.188224    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:56.245554    5361 system_svc.go:56] duration metric: took 147.204534ms WaitForService to wait for kubelet
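The kubelet step above runs systemctl on the node and treats a zero exit status as "running". Executed locally rather than through ssh_runner (the assumption here), the check is just:

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// --quiet suppresses output; the exit code alone answers the question.
	// Arguments copied verbatim from the Run line in the log.
	err := exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet").Run()
	if err != nil {
		fmt.Println("kubelet service not active:", err)
		return
	}
	fmt.Println("kubelet service is running")
}
```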
	I1124 13:15:56.245630    5361 kubeadm.go:587] duration metric: took 42.192603294s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 13:15:56.245661    5361 node_conditions.go:102] verifying NodePressure condition ...
	I1124 13:15:56.248950    5361 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1124 13:15:56.249046    5361 node_conditions.go:123] node cpu capacity is 2
	I1124 13:15:56.249074    5361 node_conditions.go:105] duration metric: took 3.389732ms to run NodePressure ...
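The NodePressure step reads the capacity figures logged above (203034800Ki ephemeral storage, 2 CPUs) from the node object, where they live under node.Status.Capacity. A short client-go sketch of retrieving them; the kubeconfig path is again an assumption.

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // assumed path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	node, err := cs.CoreV1().Nodes().Get(context.Background(), "addons-647907", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// Quantities are copied into locals so the pointer-receiver String() applies.
	storage := node.Status.Capacity[corev1.ResourceEphemeralStorage]
	cpu := node.Status.Capacity[corev1.ResourceCPU]
	fmt.Printf("node storage ephemeral capacity is %s\n", storage.String())
	fmt.Printf("node cpu capacity is %s\n", cpu.String())
}
```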
	I1124 13:15:56.249119    5361 start.go:242] waiting for startup goroutines ...
	I1124 13:15:56.385391    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:56.406985    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:56.409080    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:56.634857    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:56.885225    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:56.908233    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:56.908957    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:57.136065    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:57.385744    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:57.409979    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:57.410120    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:57.635697    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:57.884896    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:57.909186    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:57.909381    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:58.135299    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:58.386248    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:58.408192    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:58.408987    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:58.634884    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:58.885886    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:58.909681    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:58.910231    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:59.137379    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:59.385182    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:59.409179    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:59.410085    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:59.639349    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:59.887544    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:59.920239    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:59.920492    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:00.141501    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:00.398635    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:00.423558    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:16:00.438617    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:00.638807    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:00.885261    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:00.913496    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:00.913754    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:16:01.137921    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:01.385792    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:01.408795    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:16:01.409195    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:01.652803    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:01.885805    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:01.908750    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:16:01.909385    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:02.135181    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:02.386152    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:02.408520    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:16:02.408805    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:02.635526    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:02.886876    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:02.909248    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:16:02.911486    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:03.135525    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:03.384984    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:03.408593    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:16:03.409073    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:03.635326    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:03.885402    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:03.908598    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:16:03.909465    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:04.136020    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:04.385786    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:04.409413    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:04.409535    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:16:04.635775    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:04.885394    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:04.908436    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:04.908656    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:16:05.136096    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:05.386229    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:05.409809    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:05.410492    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:16:05.635855    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:05.885046    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:05.915169    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:16:05.916124    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:06.134796    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:06.385751    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:06.407996    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:16:06.408182    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:06.635480    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:06.884919    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:06.907967    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:16:06.908180    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:07.135771    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:07.386227    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:07.408513    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:16:07.409192    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:07.635560    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:07.885462    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:07.907456    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:16:07.908017    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:08.135096    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:08.385013    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:08.408924    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:16:08.409023    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:08.634702    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:08.886529    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:08.908892    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:08.909014    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:16:09.135284    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:09.386124    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:09.407932    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:16:09.409321    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:09.635401    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:09.885195    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:09.907259    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:16:09.909722    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:10.136068    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:10.388438    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:10.487061    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:16:10.488177    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:10.634891    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:10.885707    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:10.986291    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:10.986452    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:16:11.135344    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:11.385800    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:11.408819    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:16:11.408961    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:11.635874    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:11.885851    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:11.908105    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:16:11.910148    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:12.135285    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:12.386052    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:12.486881    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:16:12.487247    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:12.635056    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:12.885350    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:12.913340    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:12.913426    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:16:13.135787    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:13.385561    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:13.408794    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:16:13.409696    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:13.635275    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:13.886103    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:13.912223    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:13.912591    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:16:14.135334    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:14.384773    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:14.418958    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:14.420895    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:16:14.635563    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:14.888138    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:14.920965    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:14.926014    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:16:15.135383    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:15.385482    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:15.408988    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:16:15.409608    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:15.638179    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:15.886080    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:15.910829    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:15.911956    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:16:16.134963    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:16.385653    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:16.416419    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:16:16.416810    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:16.635815    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:16.885728    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:16.908157    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:16:16.910448    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:17.136162    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:17.385909    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:17.409068    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:17.409447    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:16:17.635507    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:17.885707    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:17.916787    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:16:17.917074    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:18.135323    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:18.386282    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:18.408331    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:16:18.409868    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:18.635228    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:18.887281    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:18.914037    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:16:18.922433    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:19.135754    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:19.385527    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:19.409711    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:16:19.409784    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:19.635459    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:19.885334    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:19.910170    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:16:19.910575    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:20.136565    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:20.386255    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:20.409690    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:16:20.410564    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:20.640717    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:20.885377    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:20.908677    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:16:20.910333    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:21.135723    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:21.385845    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:21.409850    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:21.410224    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:16:21.635892    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:21.887867    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:21.910309    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:21.910634    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:16:22.137888    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:22.394894    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:22.409459    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:16:22.409757    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:22.635171    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:22.891036    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:22.909296    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:16:22.909674    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:23.134692    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:23.386508    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:23.487841    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:16:23.488021    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:23.635278    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:23.886938    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:23.910432    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:16:23.910618    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:24.134572    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:24.385593    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:24.409335    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:16:24.409717    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:24.636355    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:24.887320    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:24.910267    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:16:24.910359    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:25.135246    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:25.385601    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:25.409233    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:25.409576    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:16:25.635765    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:25.885053    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:25.911875    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:25.912795    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:16:26.135043    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:26.385596    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:26.407411    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:16:26.408006    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:26.635273    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:26.886392    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:26.910225    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:16:26.910603    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:27.135996    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:27.385860    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:27.408356    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:16:27.409276    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:27.635943    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:27.887806    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:27.911936    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:27.912436    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:16:28.136670    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:28.385850    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:28.408253    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:28.408341    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:16:28.636199    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:28.885932    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:28.908680    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:28.910178    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:16:29.135814    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:29.385106    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:29.409497    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:16:29.410369    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:29.635634    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:29.884947    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:29.914586    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:29.914694    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:16:30.139176    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:30.388263    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:30.408324    5361 kapi.go:107] duration metric: took 1m10.504422616s to wait for kubernetes.io/minikube-addons=registry ...
	I1124 13:16:30.408508    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:30.635478    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:30.888421    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:30.908410    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:31.135528    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:31.385739    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:31.408205    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:31.635316    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:31.885232    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:31.909169    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:32.135183    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:32.386139    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:32.408119    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:32.635869    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:32.885058    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:32.909910    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:33.135348    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:33.385579    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:33.409059    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:33.634765    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:33.885519    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:33.908910    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:34.135232    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:34.385882    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:34.413298    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:34.635720    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:34.884850    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:34.907579    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:35.135512    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:35.385073    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:35.408028    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:35.636059    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:35.885702    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:35.913016    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:36.135065    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:36.385553    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:36.409496    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:36.635876    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:36.887400    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:36.909294    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:37.135520    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:37.385175    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:37.408018    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:37.635310    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:37.891454    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:37.910788    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:38.135490    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:38.393444    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:38.420822    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:38.636390    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:38.885694    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:38.908533    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:39.134872    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:39.388151    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:39.408203    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:39.635795    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:39.885324    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:39.913152    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:40.135564    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:40.395841    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:40.417441    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:40.637930    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:40.889633    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:40.912396    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:41.135091    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:41.385404    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:41.408107    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:41.634710    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:41.885462    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:41.909586    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:42.136499    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:42.385447    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:42.408655    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:42.635963    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:42.890227    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:42.909105    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:43.135554    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:43.386119    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:43.408418    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:43.637189    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:43.886115    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:43.909560    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:44.135528    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:44.385245    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:44.409093    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:44.636155    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:44.886169    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:44.912840    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:45.135472    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:45.386114    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:45.408208    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:45.635533    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:45.884846    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:45.907896    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:46.136003    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:46.385817    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:46.409614    5361 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:46.641770    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:46.887693    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:46.987923    5361 kapi.go:107] duration metric: took 1m27.083155964s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1124 13:16:47.134625    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:47.385138    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:47.635613    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:47.890609    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:48.208454    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:48.386570    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:48.636326    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:48.885906    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:49.135491    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:49.385464    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:49.636053    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:49.884785    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:50.139035    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:50.385697    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:50.634945    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:50.885891    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:51.136023    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:51.386156    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:51.635778    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:51.885657    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:52.134684    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:52.384886    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:52.635302    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:52.886270    5361 kapi.go:107] duration metric: took 1m32.504694513s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1124 13:16:53.139108    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:53.634909    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:54.138645    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:54.635400    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:55.135446    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:55.634975    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:56.135128    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:56.635306    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:57.135450    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:57.634775    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:58.135327    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:58.635176    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:59.134895    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:59.635465    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:17:00.135343    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:17:00.634961    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:17:01.136563    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:17:01.635852    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:17:02.138307    5361 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:17:02.637593    5361 kapi.go:107] duration metric: took 1m39.005772808s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1124 13:17:02.638960    5361 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-647907 cluster.
	I1124 13:17:02.640342    5361 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1124 13:17:02.641588    5361 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1124 13:17:02.642953    5361 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, ingress-dns, cloud-spanner, registry-creds, nvidia-device-plugin, amd-gpu-device-plugin, inspektor-gadget, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I1124 13:17:02.644419    5361 addons.go:530] duration metric: took 1m48.590954305s for enable addons: enabled=[default-storageclass storage-provisioner ingress-dns cloud-spanner registry-creds nvidia-device-plugin amd-gpu-device-plugin inspektor-gadget metrics-server yakd storage-provisioner-rancher volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I1124 13:17:02.644486    5361 start.go:247] waiting for cluster config update ...
	I1124 13:17:02.644515    5361 start.go:256] writing updated cluster config ...
	I1124 13:17:02.644827    5361 ssh_runner.go:195] Run: rm -f paused
	I1124 13:17:02.650565    5361 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 13:17:02.654483    5361 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-hhndw" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:17:02.660920    5361 pod_ready.go:94] pod "coredns-66bc5c9577-hhndw" is "Ready"
	I1124 13:17:02.660953    5361 pod_ready.go:86] duration metric: took 6.435155ms for pod "coredns-66bc5c9577-hhndw" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:17:02.663832    5361 pod_ready.go:83] waiting for pod "etcd-addons-647907" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:17:02.669824    5361 pod_ready.go:94] pod "etcd-addons-647907" is "Ready"
	I1124 13:17:02.669859    5361 pod_ready.go:86] duration metric: took 5.99516ms for pod "etcd-addons-647907" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:17:02.672774    5361 pod_ready.go:83] waiting for pod "kube-apiserver-addons-647907" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:17:02.678187    5361 pod_ready.go:94] pod "kube-apiserver-addons-647907" is "Ready"
	I1124 13:17:02.678262    5361 pod_ready.go:86] duration metric: took 5.458196ms for pod "kube-apiserver-addons-647907" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:17:02.681172    5361 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-647907" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:17:03.055754    5361 pod_ready.go:94] pod "kube-controller-manager-addons-647907" is "Ready"
	I1124 13:17:03.055779    5361 pod_ready.go:86] duration metric: took 374.579075ms for pod "kube-controller-manager-addons-647907" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:17:03.256655    5361 pod_ready.go:83] waiting for pod "kube-proxy-n8mpw" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:17:03.655143    5361 pod_ready.go:94] pod "kube-proxy-n8mpw" is "Ready"
	I1124 13:17:03.655217    5361 pod_ready.go:86] duration metric: took 398.510326ms for pod "kube-proxy-n8mpw" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:17:03.856295    5361 pod_ready.go:83] waiting for pod "kube-scheduler-addons-647907" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:17:04.255020    5361 pod_ready.go:94] pod "kube-scheduler-addons-647907" is "Ready"
	I1124 13:17:04.255048    5361 pod_ready.go:86] duration metric: took 398.714947ms for pod "kube-scheduler-addons-647907" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:17:04.255062    5361 pod_ready.go:40] duration metric: took 1.604466409s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 13:17:04.728598    5361 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1124 13:17:04.731032    5361 out.go:179] * Done! kubectl is now configured to use "addons-647907" cluster and "default" namespace by default
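
Editor's note: the kapi.go:96 lines above come from a poll loop. Each addon is tracked by listing pods that match a label selector and retrying until they are Ready; the "duration metric: took ..." lines mark each selector's completion. The following is a minimal client-go sketch of that pattern, illustrative only and not minikube's actual kapi.go code; the selector, interval, and timeout values are assumptions.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPodsByLabel retries until every pod matching selector is Running,
// mirroring the "waiting for pod ... current state: Pending" loop above.
func waitForPodsByLabel(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				return false, nil // treat API hiccups as retryable
			}
			if len(pods.Items) == 0 {
				fmt.Printf("waiting for pod %q, current state: Pending\n", selector)
				return false, nil
			}
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
					return false, nil
				}
			}
			return true, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	start := time.Now()
	// Selector and timeout are assumptions for illustration.
	if err := waitForPodsByLabel(context.Background(), cs, "kube-system",
		"kubernetes.io/minikube-addons=csi-hostpath-driver", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Printf("duration metric: took %s to wait for the selector\n", time.Since(start))
}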
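
Editor's note: the gcp-auth message above says the credential mount can be skipped per pod via a `gcp-auth-skip-secret` label. A minimal client-go sketch of creating such a pod follows, assuming the label key from the output with a "true" value; the pod name, namespace, and image are illustrative, not taken from this run.

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name: "no-gcp-creds", // hypothetical pod name
			// Label key taken from the minikube output above; the "true"
			// value is an assumption for illustration.
			Labels: map[string]string{"gcp-auth-skip-secret": "true"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{Name: "app", Image: "busybox:1.28"}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.Background(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}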
	
	
	==> CRI-O <==
	Nov 24 13:17:06 addons-647907 crio[833]: time="2025-11-24T13:17:06.112974031Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:cd7fa17dc6ef2e7412d310e84dfc34e0e655af0c2160b9af566855406fe45238 UID:4965dea7-ab1e-4641-94c0-23710f8285e0 NetNS:/var/run/netns/26e26b4d-9cd8-4811-abde-4485aaec5eeb Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40000d4b70}] Aliases:map[]}"
	Nov 24 13:17:06 addons-647907 crio[833]: time="2025-11-24T13:17:06.113143141Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 24 13:17:06 addons-647907 crio[833]: time="2025-11-24T13:17:06.117362384Z" level=info msg="Ran pod sandbox cd7fa17dc6ef2e7412d310e84dfc34e0e655af0c2160b9af566855406fe45238 with infra container: default/busybox/POD" id=c064589e-cae9-40f3-8db8-dd5f24df9ae4 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 24 13:17:06 addons-647907 crio[833]: time="2025-11-24T13:17:06.119309215Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=556b16fb-5e8f-4ddf-8b4d-739225a4b3c8 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 13:17:06 addons-647907 crio[833]: time="2025-11-24T13:17:06.120266786Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=556b16fb-5e8f-4ddf-8b4d-739225a4b3c8 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 13:17:06 addons-647907 crio[833]: time="2025-11-24T13:17:06.120344095Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=556b16fb-5e8f-4ddf-8b4d-739225a4b3c8 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 13:17:06 addons-647907 crio[833]: time="2025-11-24T13:17:06.123779562Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=c65367e4-f633-442b-a788-3583ada622ca name=/runtime.v1.ImageService/PullImage
	Nov 24 13:17:06 addons-647907 crio[833]: time="2025-11-24T13:17:06.127630589Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 24 13:17:08 addons-647907 crio[833]: time="2025-11-24T13:17:08.110958487Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=c65367e4-f633-442b-a788-3583ada622ca name=/runtime.v1.ImageService/PullImage
	Nov 24 13:17:08 addons-647907 crio[833]: time="2025-11-24T13:17:08.111903685Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=ef943013-9fef-41e0-a3ba-c4c96d6a54aa name=/runtime.v1.ImageService/ImageStatus
	Nov 24 13:17:08 addons-647907 crio[833]: time="2025-11-24T13:17:08.114090673Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=bddf0d81-8979-4a0e-8425-0cd4a0eaa893 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 13:17:08 addons-647907 crio[833]: time="2025-11-24T13:17:08.119850738Z" level=info msg="Creating container: default/busybox/busybox" id=d52d818e-c623-4c59-a205-87bffccaff7d name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 13:17:08 addons-647907 crio[833]: time="2025-11-24T13:17:08.119999153Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 13:17:08 addons-647907 crio[833]: time="2025-11-24T13:17:08.126654815Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 13:17:08 addons-647907 crio[833]: time="2025-11-24T13:17:08.127194873Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 13:17:08 addons-647907 crio[833]: time="2025-11-24T13:17:08.146415825Z" level=info msg="Created container 76fb47c5dcc709fbb839b7ed0da45df735f018b06142da1b31e2751f25fbcb8b: default/busybox/busybox" id=d52d818e-c623-4c59-a205-87bffccaff7d name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 13:17:08 addons-647907 crio[833]: time="2025-11-24T13:17:08.147660595Z" level=info msg="Starting container: 76fb47c5dcc709fbb839b7ed0da45df735f018b06142da1b31e2751f25fbcb8b" id=f29c9ea9-d3c9-44da-ac40-eda077f1581d name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 13:17:08 addons-647907 crio[833]: time="2025-11-24T13:17:08.152358841Z" level=info msg="Started container" PID=4941 containerID=76fb47c5dcc709fbb839b7ed0da45df735f018b06142da1b31e2751f25fbcb8b description=default/busybox/busybox id=f29c9ea9-d3c9-44da-ac40-eda077f1581d name=/runtime.v1.RuntimeService/StartContainer sandboxID=cd7fa17dc6ef2e7412d310e84dfc34e0e655af0c2160b9af566855406fe45238
	Nov 24 13:17:08 addons-647907 crio[833]: time="2025-11-24T13:17:08.841496725Z" level=info msg="Removing container: 5b62e14ae59723df7f0e2dba5626fb54adf9ba67a0f5347169d67aab27e0611d" id=e300cc03-6ed8-4a9e-a6fd-cfc56e51b0ea name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 24 13:17:08 addons-647907 crio[833]: time="2025-11-24T13:17:08.845122913Z" level=info msg="Error loading conmon cgroup of container 5b62e14ae59723df7f0e2dba5626fb54adf9ba67a0f5347169d67aab27e0611d: cgroup deleted" id=e300cc03-6ed8-4a9e-a6fd-cfc56e51b0ea name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 24 13:17:08 addons-647907 crio[833]: time="2025-11-24T13:17:08.850719538Z" level=info msg="Removed container 5b62e14ae59723df7f0e2dba5626fb54adf9ba67a0f5347169d67aab27e0611d: gcp-auth/gcp-auth-certs-create-9p2k5/create" id=e300cc03-6ed8-4a9e-a6fd-cfc56e51b0ea name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 24 13:17:08 addons-647907 crio[833]: time="2025-11-24T13:17:08.859472834Z" level=info msg="Stopping pod sandbox: 8752e240ed653e1868dffb71cb5c0774c0810d950033ca63237b90dcbc069f3a" id=a357e127-2451-4c9f-85b5-3b992f19db48 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 24 13:17:08 addons-647907 crio[833]: time="2025-11-24T13:17:08.859540305Z" level=info msg="Stopped pod sandbox (already stopped): 8752e240ed653e1868dffb71cb5c0774c0810d950033ca63237b90dcbc069f3a" id=a357e127-2451-4c9f-85b5-3b992f19db48 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 24 13:17:08 addons-647907 crio[833]: time="2025-11-24T13:17:08.859970364Z" level=info msg="Removing pod sandbox: 8752e240ed653e1868dffb71cb5c0774c0810d950033ca63237b90dcbc069f3a" id=b3f2e48c-3f0c-4770-9e8f-0e9fcb369ad9 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 24 13:17:08 addons-647907 crio[833]: time="2025-11-24T13:17:08.8653411Z" level=info msg="Removed pod sandbox: 8752e240ed653e1868dffb71cb5c0774c0810d950033ca63237b90dcbc069f3a" id=b3f2e48c-3f0c-4770-9e8f-0e9fcb369ad9 name=/runtime.v1.RuntimeService/RemovePodSandbox
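
Editor's note: the "==> container status <==" table below is the kind of listing the CRI endpoint returns for the sandbox/pull/create/start lifecycle logged above. A minimal sketch of querying CRI-O over its conventional socket with the CRI client stubs follows; the socket path and printed columns are assumptions, not what the report's tooling actually ran.

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimev1 "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// CRI-O's conventional socket path; adjust for other runtimes.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	client := runtimev1.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	resp, err := client.ListContainers(ctx, &runtimev1.ListContainersRequest{})
	if err != nil {
		panic(err)
	}
	for _, c := range resp.Containers {
		// A few of the columns the status table below shows.
		fmt.Printf("%.13s  %s  %s  %s\n",
			c.GetId(), c.GetImage().GetImage(), c.GetState(), c.GetMetadata().GetName())
	}
}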
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                        NAMESPACE
	76fb47c5dcc70       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          7 seconds ago        Running             busybox                                  0                   cd7fa17dc6ef2       busybox                                    default
	503deb47d65b1       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 13 seconds ago       Running             gcp-auth                                 0                   4112ee0d386f5       gcp-auth-78565c9fb4-kn6st                  gcp-auth
	f5728fafdcfd6       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          24 seconds ago       Running             csi-snapshotter                          0                   7d905a08f881d       csi-hostpathplugin-89nqp                   kube-system
	8ae9f6e9c70db       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          26 seconds ago       Running             csi-provisioner                          0                   7d905a08f881d       csi-hostpathplugin-89nqp                   kube-system
	5b3fa06bb0192       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            27 seconds ago       Running             liveness-probe                           0                   7d905a08f881d       csi-hostpathplugin-89nqp                   kube-system
	973aff8a30a4f       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           28 seconds ago       Running             hostpath                                 0                   7d905a08f881d       csi-hostpathplugin-89nqp                   kube-system
	7f05ea466739f       registry.k8s.io/ingress-nginx/controller@sha256:655333e68deab34ee3701f400c4d5d9709000cdfdadb802e4bd7500b027e1259                             30 seconds ago       Running             controller                               0                   daf0032de8fde       ingress-nginx-controller-6c8bf45fb-m8nwb   ingress-nginx
	359ce6fb2238a       32daba64b064c571f27dbd4e285969f47f8e5dd6c692279b48622e941b4d137f                                                                             32 seconds ago       Exited              patch                                    3                   c2e77902839ae       gcp-auth-certs-patch-pbxl7                 gcp-auth
	3a0f4619a26f4       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:c2c5268a38de5c792beb84122c5350c644fbb9b85e04342ef72fa9a6d052f0b0                            37 seconds ago       Running             gadget                                   0                   6f5ebc6fa0563       gadget-6cbf2                               gadget
	6b55d5b71eba6       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                40 seconds ago       Running             node-driver-registrar                    0                   7d905a08f881d       csi-hostpathplugin-89nqp                   kube-system
	7e0552d507b6a       nvcr.io/nvidia/k8s-device-plugin@sha256:80924fc52384565a7c59f1e2f12319fb8f2b02a1c974bb3d73a9853fe01af874                                     41 seconds ago       Running             nvidia-device-plugin-ctr                 0                   fdef0c06db5cb       nvidia-device-plugin-daemonset-dn469       kube-system
	5b5f53029c3ca       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e733096c3a5b75504c6380083abc960c9627efd23e099df780adfb4eec197583                   46 seconds ago       Exited              patch                                    0                   edfe382f748a1       ingress-nginx-admission-patch-ckrmb        ingress-nginx
	1868812a69065       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              46 seconds ago       Running             registry-proxy                           0                   3947e21735713       registry-proxy-9hgsb                       kube-system
	ee0e8bc0faa14       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                                              50 seconds ago       Running             yakd                                     0                   11bb54e340385       yakd-dashboard-5ff678cb9-rt8kg             yakd-dashboard
	e66116781aa16       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             53 seconds ago       Running             csi-attacher                             0                   f02b00b45971d       csi-hostpath-attacher-0                    kube-system
	2d0baf6e27693       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               55 seconds ago       Running             minikube-ingress-dns                     0                   55418a29a9ab7       kube-ingress-dns-minikube                  kube-system
	13013bc5bc293       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e733096c3a5b75504c6380083abc960c9627efd23e099df780adfb4eec197583                   About a minute ago   Exited              create                                   0                   2485fe917df5d       ingress-nginx-admission-create-zgt7r       ingress-nginx
	0ba479f65d38f       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      About a minute ago   Running             volume-snapshot-controller               0                   177375b09ca3b       snapshot-controller-7d9fbc56b8-49v4w       kube-system
	e67c8d2fe588f       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   About a minute ago   Running             csi-external-health-monitor-controller   0                   7d905a08f881d       csi-hostpathplugin-89nqp                   kube-system
	eb8a3b03da33f       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              About a minute ago   Running             csi-resizer                              0                   e2becbd251a6a       csi-hostpath-resizer-0                     kube-system
	7ee4d3e3512b4       docker.io/library/registry@sha256:8715992817b2254fe61e74ffc6a4096d57a0cde36c95ea075676c05f7a94a630                                           About a minute ago   Running             registry                                 0                   180af60765641       registry-6b586f9694-2l7kt                  kube-system
	36b6ae96ef25a       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             About a minute ago   Running             local-path-provisioner                   0                   7739a02243c3f       local-path-provisioner-648f6765c9-dn62k    local-path-storage
	93bbb33eaf24e       gcr.io/cloud-spanner-emulator/emulator@sha256:daeab9cb1978e02113045625e2633619f465f22aac7638101995f4cd03607170                               About a minute ago   Running             cloud-spanner-emulator                   0                   4241c1d9a8ea8       cloud-spanner-emulator-5bdddb765-s88qb     default
	06594980d0770       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      About a minute ago   Running             volume-snapshot-controller               0                   56d7024f34a8f       snapshot-controller-7d9fbc56b8-qnq5w       kube-system
	586f58fd71be7       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        About a minute ago   Running             metrics-server                           0                   583f0fbfe330f       metrics-server-85b7d694d7-xlsnf            kube-system
	8a76716af61b8       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             About a minute ago   Running             storage-provisioner                      0                   7a35f43799697       storage-provisioner                        kube-system
	612efd74b90ce       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                                             About a minute ago   Running             coredns                                  0                   88fbe6dca4fce       coredns-66bc5c9577-hhndw                   kube-system
	c4588bdbb8946       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                                                             2 minutes ago        Running             kube-proxy                               0                   35fa956a8e883       kube-proxy-n8mpw                           kube-system
	646560c4253bb       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                                             2 minutes ago        Running             kindnet-cni                              0                   4af34a45b733f       kindnet-cq7x5                              kube-system
	47bfa25635dec       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                                                             2 minutes ago        Running             kube-controller-manager                  0                   5e816d39f4177       kube-controller-manager-addons-647907      kube-system
	9bbf65dfab06c       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                                                             2 minutes ago        Running             etcd                                     0                   683d1e2e165aa       etcd-addons-647907                         kube-system
	9eb49b73252f4       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                                                             2 minutes ago        Running             kube-scheduler                           0                   073bcb08ee3c0       kube-scheduler-addons-647907               kube-system
	448d21b7cb222       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                                                             2 minutes ago        Running             kube-apiserver                           0                   cb92b885a5606       kube-apiserver-addons-647907               kube-system
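	
	The table above is CRI-O's own view of the node. To reproduce the same listing against this profile, the standard `crictl` flags can be run over `minikube ssh` (profile name taken from this run; the flags are stock `crictl`, nothing specific to this job):
	
	  # list all containers, including the Exited cert-gen jobs shown above
	  $ minikube -p addons-647907 ssh -- sudo crictl ps -a
	
	  # narrow to a single pod, e.g. the busybox container started at 13:17:08
	  $ minikube -p addons-647907 ssh -- sudo crictl ps --name busybox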
	
	
	==> coredns [612efd74b90cefa74785056cca24a59600ffe50c4cd3d5dc64320505db3e6d46] <==
	[INFO] 10.244.0.10:34902 - 4243 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000187267s
	[INFO] 10.244.0.10:34902 - 36733 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002888517s
	[INFO] 10.244.0.10:34902 - 40936 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002935015s
	[INFO] 10.244.0.10:34902 - 13888 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000138331s
	[INFO] 10.244.0.10:34902 - 47191 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000185831s
	[INFO] 10.244.0.10:52476 - 40277 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000187538s
	[INFO] 10.244.0.10:52476 - 40499 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000303166s
	[INFO] 10.244.0.10:59432 - 58246 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000112149s
	[INFO] 10.244.0.10:59432 - 58024 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000081921s
	[INFO] 10.244.0.10:53141 - 15818 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000080526s
	[INFO] 10.244.0.10:53141 - 15389 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000290226s
	[INFO] 10.244.0.10:47645 - 577 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001614537s
	[INFO] 10.244.0.10:47645 - 1014 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001727539s
	[INFO] 10.244.0.10:49302 - 64609 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000146644s
	[INFO] 10.244.0.10:49302 - 65021 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000090988s
	[INFO] 10.244.0.21:51671 - 8025 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000199994s
	[INFO] 10.244.0.21:41638 - 47622 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000255765s
	[INFO] 10.244.0.21:37056 - 37264 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000166524s
	[INFO] 10.244.0.21:60926 - 48178 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000215083s
	[INFO] 10.244.0.21:50195 - 15223 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000282029s
	[INFO] 10.244.0.21:38673 - 7834 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000235284s
	[INFO] 10.244.0.21:56838 - 60541 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.003339844s
	[INFO] 10.244.0.21:46889 - 6244 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.003542266s
	[INFO] 10.244.0.21:34726 - 38234 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.001448102s
	[INFO] 10.244.0.21:37632 - 7581 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002997022s
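	
	The NXDOMAIN/NOERROR pairs above are the expected resolv.conf search-path walk: with the pod default of ndots:5, a name such as registry.kube-system.svc.cluster.local (four dots) is first tried with each search suffix appended, and only the final absolute query answers NOERROR. The same behavior can be observed from inside the cluster, reusing the busybox pod listed earlier (assuming it is still running):
	
	  # relative name: walks the search list, as in the log above
	  $ kubectl exec busybox -- nslookup registry.kube-system
	
	  # a trailing dot makes the name absolute and skips the search list
	  $ kubectl exec busybox -- nslookup registry.kube-system.svc.cluster.local.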
	
	
	==> describe nodes <==
	Name:               addons-647907
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-647907
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b5d1c9f4e75f4e638a533695fd62619949cefcab
	                    minikube.k8s.io/name=addons-647907
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T13_15_09_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-647907
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-647907"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 13:15:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-647907
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 13:17:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 13:17:11 +0000   Mon, 24 Nov 2025 13:15:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 13:17:11 +0000   Mon, 24 Nov 2025 13:15:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 13:17:11 +0000   Mon, 24 Nov 2025 13:15:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Nov 2025 13:17:11 +0000   Mon, 24 Nov 2025 13:15:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-647907
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 7283ea1857f18f20a875c29069214c9d
	  System UUID:                75c33a32-5988-45d0-af8d-ed87a64979b7
	  Boot ID:                    1b5f797b-5607-4a65-8de2-379783b7e272
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (26 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  default                     cloud-spanner-emulator-5bdddb765-s88qb      0 (0%)        0 (0%)      0 (0%)           0 (0%)         119s
	  gadget                      gadget-6cbf2                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         117s
	  gcp-auth                    gcp-auth-78565c9fb4-kn6st                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         113s
	  ingress-nginx               ingress-nginx-controller-6c8bf45fb-m8nwb    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         117s
	  kube-system                 coredns-66bc5c9577-hhndw                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m2s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 csi-hostpathplugin-89nqp                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         82s
	  kube-system                 etcd-addons-647907                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m9s
	  kube-system                 kindnet-cq7x5                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m3s
	  kube-system                 kube-apiserver-addons-647907                250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m9s
	  kube-system                 kube-controller-manager-addons-647907       200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m7s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 kube-proxy-n8mpw                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 kube-scheduler-addons-647907                100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m9s
	  kube-system                 metrics-server-85b7d694d7-xlsnf             100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         119s
	  kube-system                 nvidia-device-plugin-daemonset-dn469        0 (0%)        0 (0%)      0 (0%)           0 (0%)         82s
	  kube-system                 registry-6b586f9694-2l7kt                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 registry-creds-764b6fb674-pf26d             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 registry-proxy-9hgsb                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         82s
	  kube-system                 snapshot-controller-7d9fbc56b8-49v4w        0 (0%)        0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 snapshot-controller-7d9fbc56b8-qnq5w        0 (0%)        0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         118s
	  local-path-storage          local-path-provisioner-648f6765c9-dn62k     0 (0%)        0 (0%)      0 (0%)           0 (0%)         118s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-rt8kg              0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     117s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 2m1s  kube-proxy       
	  Normal   Starting                 2m8s  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m8s  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m7s  kubelet          Node addons-647907 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m7s  kubelet          Node addons-647907 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m7s  kubelet          Node addons-647907 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           2m4s  node-controller  Node addons-647907 event: Registered Node addons-647907 in Controller
	  Normal   NodeReady                82s   kubelet          Node addons-647907 status is now: NodeReady
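	
	This block is standard `kubectl describe node` output. For scripting against the same data, for example to check why CPU requests total 52% on this 2-CPU node, the raw fields are easier to pull with jsonpath (field paths are upstream Kubernetes, not minikube-specific):
	
	  $ kubectl describe node addons-647907
	  $ kubectl get node addons-647907 -o jsonpath='{.status.allocatable.cpu}{"\n"}{.status.allocatable.memory}{"\n"}'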
	
	
	==> dmesg <==
	[Nov24 12:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.015884] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.504458] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.033874] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.788873] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.144374] kauditd_printk_skb: 36 callbacks suppressed
	[Nov24 13:13] kauditd_printk_skb: 5 callbacks suppressed
	[Nov24 13:15] overlayfs: idmapped layers are currently not supported
	[  +0.074288] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [9bbf65dfab06c0f9b0a6968500261eebc250dd8b47864e172e662f184a66f382] <==
	{"level":"warn","ts":"2025-11-24T13:15:04.405128Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37576","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:15:04.424063Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37598","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:15:04.446171Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37616","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:15:04.473700Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37644","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:15:04.499597Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37668","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:15:04.531456Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37680","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:15:04.558560Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37710","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:15:04.584860Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37726","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:15:04.617359Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37750","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:15:04.689799Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37774","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:15:04.724105Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37788","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:15:04.753274Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37814","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:15:04.778455Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37826","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:15:04.815676Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37846","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:15:04.867509Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37878","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:15:04.874366Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37898","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:15:04.909601Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37904","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:15:04.948414Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37932","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:15:05.083560Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37948","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:15:20.664303Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53408","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:15:20.682619Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53424","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:15:42.916932Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35870","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:15:42.931466Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35880","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:15:42.975420Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35900","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:15:42.985102Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35926","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [503deb47d65b159e876f06a34337930d9e17d7917403b7ae7aa1e094ee40e4e2] <==
	2025/11/24 13:17:02 GCP Auth Webhook started!
	2025/11/24 13:17:05 Ready to marshal response ...
	2025/11/24 13:17:05 Ready to write response ...
	2025/11/24 13:17:05 Ready to marshal response ...
	2025/11/24 13:17:05 Ready to write response ...
	2025/11/24 13:17:05 Ready to marshal response ...
	2025/11/24 13:17:05 Ready to write response ...
	
	
	==> kernel <==
	 13:17:16 up 59 min,  0 user,  load average: 2.24, 1.29, 0.53
	Linux addons-647907 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [646560c4253bbd4f4ca0e15d6bfd9bc9d55fa7f25775099b26b7343265d565aa] <==
	E1124 13:15:44.730373       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1124 13:15:44.731174       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1124 13:15:44.736800       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1124 13:15:46.331202       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1124 13:15:46.331235       1 metrics.go:72] Registering metrics
	I1124 13:15:46.331315       1 controller.go:711] "Syncing nftables rules"
	E1124 13:15:46.331561       1 controller.go:417] "reading nfqueue stats" err="open /proc/net/netfilter/nfnetlink_queue: no such file or directory"
	I1124 13:15:54.736256       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 13:15:54.736314       1 main.go:301] handling current node
	I1124 13:16:04.731446       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 13:16:04.731481       1 main.go:301] handling current node
	I1124 13:16:14.730110       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 13:16:14.730141       1 main.go:301] handling current node
	I1124 13:16:24.730230       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 13:16:24.730266       1 main.go:301] handling current node
	I1124 13:16:34.729874       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 13:16:34.729920       1 main.go:301] handling current node
	I1124 13:16:44.729424       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 13:16:44.729465       1 main.go:301] handling current node
	I1124 13:16:54.730150       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 13:16:54.730185       1 main.go:301] handling current node
	I1124 13:17:04.729809       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 13:17:04.729847       1 main.go:301] handling current node
	I1124 13:17:14.731509       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 13:17:14.731537       1 main.go:301] handling current node
	
	
	==> kube-apiserver [448d21b7cb2225b5c6cb8829a9d9cc5985f1bee205abf34cb9730bc9b186bc0d] <==
	W1124 13:15:20.655289       1 logging.go:55] [core] [Channel #259 SubChannel #260]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1124 13:15:20.673561       1 logging.go:55] [core] [Channel #263 SubChannel #264]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	I1124 13:15:23.512460       1 alloc.go:328] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.100.215.71"}
	W1124 13:15:42.916955       1 logging.go:55] [core] [Channel #267 SubChannel #268]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1124 13:15:42.932063       1 logging.go:55] [core] [Channel #271 SubChannel #272]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1124 13:15:42.962269       1 logging.go:55] [core] [Channel #275 SubChannel #276]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1124 13:15:42.982702       1 logging.go:55] [core] [Channel #279 SubChannel #280]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1124 13:15:54.925170       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.100.215.71:443: connect: connection refused
	E1124 13:15:54.925215       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.100.215.71:443: connect: connection refused" logger="UnhandledError"
	W1124 13:15:54.925623       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.100.215.71:443: connect: connection refused
	E1124 13:15:54.925652       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.100.215.71:443: connect: connection refused" logger="UnhandledError"
	W1124 13:15:54.984073       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.100.215.71:443: connect: connection refused
	E1124 13:15:54.984837       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.100.215.71:443: connect: connection refused" logger="UnhandledError"
	E1124 13:16:01.567187       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.103.27.120:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.103.27.120:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.103.27.120:443: connect: connection refused" logger="UnhandledError"
	W1124 13:16:01.567542       1 handler_proxy.go:99] no RequestInfo found in the context
	E1124 13:16:01.567621       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1124 13:16:01.569327       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.103.27.120:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.103.27.120:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.103.27.120:443: connect: connection refused" logger="UnhandledError"
	E1124 13:16:01.573587       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.103.27.120:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.103.27.120:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.103.27.120:443: connect: connection refused" logger="UnhandledError"
	I1124 13:16:01.684529       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1124 13:17:14.062475       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:33654: use of closed network connection
	E1124 13:17:14.295103       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:33680: use of closed network connection
	E1124 13:17:14.421535       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:33694: use of closed network connection
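	
	The "Failed calling webhook, failing open gcp-auth-mutate.k8s.io" warnings at 13:15:54 show the apiserver tolerating the gcp-auth webhook before its backing pod was ready; "failing open" means the webhook's failure policy allowed admission to proceed. To inspect the registered webhooks and their failure policies after the fact (the resource kind is standard Kubernetes; the configuration object's name may differ from the webhook name in the log):
	
	  $ kubectl get mutatingwebhookconfigurations
	  $ kubectl get mutatingwebhookconfigurations -o jsonpath='{range .items[*]}{.metadata.name}{": "}{.webhooks[*].failurePolicy}{"\n"}{end}'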
	
	
	==> kube-controller-manager [47bfa25635dec103e10675178c02de7fcaf080dccd97feecc568c666df4bea66] <==
	I1124 13:15:12.947535       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1124 13:15:12.947575       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1124 13:15:12.947616       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1124 13:15:12.947644       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1124 13:15:12.947680       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1124 13:15:12.947707       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1124 13:15:12.947733       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1124 13:15:12.947774       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1124 13:15:12.947816       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1124 13:15:12.951857       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 13:15:12.951924       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1124 13:15:12.951958       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1124 13:15:12.951984       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1124 13:15:12.951989       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1124 13:15:12.951994       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1124 13:15:12.963510       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="addons-647907" podCIDRs=["10.244.0.0/24"]
	E1124 13:15:17.928034       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1124 13:15:42.909528       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1124 13:15:42.909680       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1124 13:15:42.909725       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1124 13:15:42.950425       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1124 13:15:42.955654       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1124 13:15:43.010060       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 13:15:43.055990       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 13:15:57.899017       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [c4588bdbb8946f09fafad25ba2e09f39ff759f2b8cad2f6dadade51f4c71ae52] <==
	I1124 13:15:14.921607       1 server_linux.go:53] "Using iptables proxy"
	I1124 13:15:15.037310       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1124 13:15:15.142568       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1124 13:15:15.142608       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1124 13:15:15.142670       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 13:15:15.218939       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 13:15:15.219025       1 server_linux.go:132] "Using iptables Proxier"
	I1124 13:15:15.232444       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 13:15:15.239329       1 server.go:527] "Version info" version="v1.34.1"
	I1124 13:15:15.239376       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 13:15:15.240853       1 config.go:200] "Starting service config controller"
	I1124 13:15:15.240871       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 13:15:15.240887       1 config.go:106] "Starting endpoint slice config controller"
	I1124 13:15:15.240891       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 13:15:15.240909       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 13:15:15.240913       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1124 13:15:15.241530       1 config.go:309] "Starting node config controller"
	I1124 13:15:15.241549       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 13:15:15.241561       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1124 13:15:15.341036       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1124 13:15:15.341071       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1124 13:15:15.341113       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
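	
	The proxy came up in iptables mode ("Using iptables Proxier"), so every Service on this node resolves through the KUBE-SERVICES nat chain. When a ServiceCmd-style failure needs triage, those rules can be dumped directly on the node (chain names are standard kube-proxy, not specific to this run):
	
	  $ minikube -p addons-647907 ssh -- sudo iptables -t nat -L KUBE-SERVICES -n | head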
	
	
	==> kube-scheduler [9eb49b73252f434f02f8d18b6013cd3bdd4b40c3b4046a35100d5350b9164167] <==
	E1124 13:15:06.116375       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1124 13:15:06.118961       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1124 13:15:06.119036       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1124 13:15:06.119106       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1124 13:15:06.119158       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1124 13:15:06.119194       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1124 13:15:06.119249       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1124 13:15:06.119302       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1124 13:15:06.119340       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1124 13:15:06.119471       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1124 13:15:06.119526       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1124 13:15:06.119564       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1124 13:15:06.955399       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1124 13:15:06.987036       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1124 13:15:06.988249       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1124 13:15:07.122684       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1124 13:15:07.173491       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1124 13:15:07.182166       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1124 13:15:07.194924       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1124 13:15:07.250838       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1124 13:15:07.284535       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1124 13:15:07.286928       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1124 13:15:07.323880       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1124 13:15:07.356123       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1124 13:15:10.392250       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 24 13:16:35 addons-647907 kubelet[1273]: I1124 13:16:35.413934    1273 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-dn469" secret="" err="secret \"gcp-auth\" not found"
	Nov 24 13:16:38 addons-647907 kubelet[1273]: I1124 13:16:38.405482    1273 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/nvidia-device-plugin-daemonset-dn469" podStartSLOduration=6.214540811 podStartE2EDuration="44.405460503s" podCreationTimestamp="2025-11-24 13:15:54 +0000 UTC" firstStartedPulling="2025-11-24 13:15:55.989926914 +0000 UTC m=+47.315913424" lastFinishedPulling="2025-11-24 13:16:34.180846515 +0000 UTC m=+85.506833116" observedRunningTime="2025-11-24 13:16:34.427166475 +0000 UTC m=+85.753152994" watchObservedRunningTime="2025-11-24 13:16:38.405460503 +0000 UTC m=+89.731447014"
	Nov 24 13:16:42 addons-647907 kubelet[1273]: I1124 13:16:42.842514    1273 scope.go:117] "RemoveContainer" containerID="9bc702d299d7e161909a7960b7c4d281970defbcb99f1dd8d81ce359a6ae8131"
	Nov 24 13:16:43 addons-647907 kubelet[1273]: I1124 13:16:43.468754    1273 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gadget/gadget-6cbf2" podStartSLOduration=69.216444381 podStartE2EDuration="1m24.468734661s" podCreationTimestamp="2025-11-24 13:15:19 +0000 UTC" firstStartedPulling="2025-11-24 13:16:23.389815097 +0000 UTC m=+74.715801608" lastFinishedPulling="2025-11-24 13:16:38.642105369 +0000 UTC m=+89.968091888" observedRunningTime="2025-11-24 13:16:39.479495435 +0000 UTC m=+90.805481970" watchObservedRunningTime="2025-11-24 13:16:43.468734661 +0000 UTC m=+94.794721172"
	Nov 24 13:16:44 addons-647907 kubelet[1273]: I1124 13:16:44.481342    1273 scope.go:117] "RemoveContainer" containerID="9bc702d299d7e161909a7960b7c4d281970defbcb99f1dd8d81ce359a6ae8131"
	Nov 24 13:16:45 addons-647907 kubelet[1273]: I1124 13:16:45.853916    1273 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n9xvf\" (UniqueName: \"kubernetes.io/projected/3e90e248-2efe-49b5-80fc-bfa936939316-kube-api-access-n9xvf\") pod \"3e90e248-2efe-49b5-80fc-bfa936939316\" (UID: \"3e90e248-2efe-49b5-80fc-bfa936939316\") "
	Nov 24 13:16:45 addons-647907 kubelet[1273]: I1124 13:16:45.861139    1273 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e90e248-2efe-49b5-80fc-bfa936939316-kube-api-access-n9xvf" (OuterVolumeSpecName: "kube-api-access-n9xvf") pod "3e90e248-2efe-49b5-80fc-bfa936939316" (UID: "3e90e248-2efe-49b5-80fc-bfa936939316"). InnerVolumeSpecName "kube-api-access-n9xvf". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Nov 24 13:16:45 addons-647907 kubelet[1273]: I1124 13:16:45.955862    1273 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-n9xvf\" (UniqueName: \"kubernetes.io/projected/3e90e248-2efe-49b5-80fc-bfa936939316-kube-api-access-n9xvf\") on node \"addons-647907\" DevicePath \"\""
	Nov 24 13:16:46 addons-647907 kubelet[1273]: I1124 13:16:46.487830    1273 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c2e77902839ae048dea4c63da03ff89c8270ba49434df1410809c562f2e834e9"
	Nov 24 13:16:46 addons-647907 kubelet[1273]: I1124 13:16:46.517065    1273 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="ingress-nginx/ingress-nginx-controller-6c8bf45fb-m8nwb" podStartSLOduration=68.791305676 podStartE2EDuration="1m27.517047008s" podCreationTimestamp="2025-11-24 13:15:19 +0000 UTC" firstStartedPulling="2025-11-24 13:16:27.183701545 +0000 UTC m=+78.509688055" lastFinishedPulling="2025-11-24 13:16:45.909442876 +0000 UTC m=+97.235429387" observedRunningTime="2025-11-24 13:16:46.515820864 +0000 UTC m=+97.841807392" watchObservedRunningTime="2025-11-24 13:16:46.517047008 +0000 UTC m=+97.843033527"
	Nov 24 13:16:48 addons-647907 kubelet[1273]: I1124 13:16:48.017640    1273 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: hostpath.csi.k8s.io endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
	Nov 24 13:16:48 addons-647907 kubelet[1273]: I1124 13:16:48.017711    1273 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: hostpath.csi.k8s.io at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
	Nov 24 13:16:52 addons-647907 kubelet[1273]: I1124 13:16:52.553608    1273 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/csi-hostpathplugin-89nqp" podStartSLOduration=2.769944179 podStartE2EDuration="58.553589194s" podCreationTimestamp="2025-11-24 13:15:54 +0000 UTC" firstStartedPulling="2025-11-24 13:15:55.677140353 +0000 UTC m=+47.003126872" lastFinishedPulling="2025-11-24 13:16:51.460785368 +0000 UTC m=+102.786771887" observedRunningTime="2025-11-24 13:16:52.549864798 +0000 UTC m=+103.875851309" watchObservedRunningTime="2025-11-24 13:16:52.553589194 +0000 UTC m=+103.879575713"
	Nov 24 13:16:58 addons-647907 kubelet[1273]: E1124 13:16:58.867688    1273 secret.go:189] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
	Nov 24 13:16:58 addons-647907 kubelet[1273]: E1124 13:16:58.867770    1273 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eb7a3229-dc9f-4def-8eb7-2b7e2e8b7ad6-gcr-creds podName:eb7a3229-dc9f-4def-8eb7-2b7e2e8b7ad6 nodeName:}" failed. No retries permitted until 2025-11-24 13:18:02.867752734 +0000 UTC m=+174.193739245 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "gcr-creds" (UniqueName: "kubernetes.io/secret/eb7a3229-dc9f-4def-8eb7-2b7e2e8b7ad6-gcr-creds") pod "registry-creds-764b6fb674-pf26d" (UID: "eb7a3229-dc9f-4def-8eb7-2b7e2e8b7ad6") : secret "registry-creds-gcr" not found
	Nov 24 13:16:59 addons-647907 kubelet[1273]: W1124 13:16:59.289828    1273 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/72292e2fa4c837496fd187b0d5c5858af15fb8e4b6965f101b72fc196db21cc3/crio-4112ee0d386f5a798f22ac52c9770dc0649c947d6f807712c38c6eac61f966d1 WatchSource:0}: Error finding container 4112ee0d386f5a798f22ac52c9770dc0649c947d6f807712c38c6eac61f966d1: Status 404 returned error can't find the container with id 4112ee0d386f5a798f22ac52c9770dc0649c947d6f807712c38c6eac61f966d1
	Nov 24 13:17:02 addons-647907 kubelet[1273]: I1124 13:17:02.844526    1273 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6e0012b-cc88-4630-af82-0e0c5eb8d86c" path="/var/lib/kubelet/pods/b6e0012b-cc88-4630-af82-0e0c5eb8d86c/volumes"
	Nov 24 13:17:05 addons-647907 kubelet[1273]: I1124 13:17:05.779733    1273 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gcp-auth/gcp-auth-78565c9fb4-kn6st" podStartSLOduration=99.63817306 podStartE2EDuration="1m42.779710643s" podCreationTimestamp="2025-11-24 13:15:23 +0000 UTC" firstStartedPulling="2025-11-24 13:16:59.293228884 +0000 UTC m=+110.619215394" lastFinishedPulling="2025-11-24 13:17:02.434766458 +0000 UTC m=+113.760752977" observedRunningTime="2025-11-24 13:17:02.602517325 +0000 UTC m=+113.928503852" watchObservedRunningTime="2025-11-24 13:17:05.779710643 +0000 UTC m=+117.105697154"
	Nov 24 13:17:05 addons-647907 kubelet[1273]: I1124 13:17:05.839734    1273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lcjg7\" (UniqueName: \"kubernetes.io/projected/4965dea7-ab1e-4641-94c0-23710f8285e0-kube-api-access-lcjg7\") pod \"busybox\" (UID: \"4965dea7-ab1e-4641-94c0-23710f8285e0\") " pod="default/busybox"
	Nov 24 13:17:05 addons-647907 kubelet[1273]: I1124 13:17:05.840006    1273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/4965dea7-ab1e-4641-94c0-23710f8285e0-gcp-creds\") pod \"busybox\" (UID: \"4965dea7-ab1e-4641-94c0-23710f8285e0\") " pod="default/busybox"
	Nov 24 13:17:06 addons-647907 kubelet[1273]: W1124 13:17:06.116215    1273 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/72292e2fa4c837496fd187b0d5c5858af15fb8e4b6965f101b72fc196db21cc3/crio-cd7fa17dc6ef2e7412d310e84dfc34e0e655af0c2160b9af566855406fe45238 WatchSource:0}: Error finding container cd7fa17dc6ef2e7412d310e84dfc34e0e655af0c2160b9af566855406fe45238: Status 404 returned error can't find the container with id cd7fa17dc6ef2e7412d310e84dfc34e0e655af0c2160b9af566855406fe45238
	Nov 24 13:17:08 addons-647907 kubelet[1273]: I1124 13:17:08.839784    1273 scope.go:117] "RemoveContainer" containerID="5b62e14ae59723df7f0e2dba5626fb54adf9ba67a0f5347169d67aab27e0611d"
	Nov 24 13:17:08 addons-647907 kubelet[1273]: E1124 13:17:08.977483    1273 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/d5e95e2c49cb25a3c16833de232e2d96c49f693e973ad3e46fdd15ebd307ae57/diff" to get inode usage: stat /var/lib/containers/storage/overlay/d5e95e2c49cb25a3c16833de232e2d96c49f693e973ad3e46fdd15ebd307ae57/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/gcp-auth_gcp-auth-certs-patch-pbxl7_3e90e248-2efe-49b5-80fc-bfa936939316/patch/1.log" to get inode usage: stat /var/log/pods/gcp-auth_gcp-auth-certs-patch-pbxl7_3e90e248-2efe-49b5-80fc-bfa936939316/patch/1.log: no such file or directory
	Nov 24 13:17:08 addons-647907 kubelet[1273]: E1124 13:17:08.981690    1273 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/6df881df94d35fe2ca4307ad7e084d5d46a37b7240dedcea6685313ef27de71e/diff" to get inode usage: stat /var/lib/containers/storage/overlay/6df881df94d35fe2ca4307ad7e084d5d46a37b7240dedcea6685313ef27de71e/diff: no such file or directory, extraDiskErr: <nil>
	Nov 24 13:17:16 addons-647907 kubelet[1273]: I1124 13:17:16.034344    1273 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=9.042123752 podStartE2EDuration="11.03432338s" podCreationTimestamp="2025-11-24 13:17:05 +0000 UTC" firstStartedPulling="2025-11-24 13:17:06.120732012 +0000 UTC m=+117.446718523" lastFinishedPulling="2025-11-24 13:17:08.112931632 +0000 UTC m=+119.438918151" observedRunningTime="2025-11-24 13:17:08.61567211 +0000 UTC m=+119.941658629" watchObservedRunningTime="2025-11-24 13:17:16.03432338 +0000 UTC m=+127.360309891"
	
	
	==> storage-provisioner [8a76716af61b8877cf1b7e3b54f8de99e52ffe5a1a222437f661359793b8bc5a] <==
	W1124 13:16:50.695401       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:16:52.698018       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:16:52.702632       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:16:54.706265       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:16:54.711013       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:16:56.715071       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:16:56.721990       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:16:58.725716       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:16:58.730955       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:17:00.735311       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:17:00.741516       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:17:02.745588       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:17:02.750636       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:17:04.759500       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:17:04.795034       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:17:06.798626       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:17:06.804098       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:17:08.806760       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:17:08.810601       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:17:10.813811       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:17:10.820420       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:17:12.823942       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:17:12.827692       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:17:14.831159       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:17:14.836351       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
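A side observation on the storage-provisioner output above: it is dominated by pairs of "v1 Endpoints is deprecated" warnings arriving every two seconds, which suggests the provisioner still takes its leader-election lock on (or at least repeatedly polls) v1 Endpoints objects; since Kubernetes v1.33 the API server attaches that warning to every such request. Below is a minimal client-go sketch of Lease-based leader election, which avoids Endpoints entirely (the lock name, namespace, identity, and timings are illustrative assumptions, not the provisioner's actual configuration):

	package main

	import (
		"context"
		"log"
		"os"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
		"k8s.io/client-go/tools/leaderelection"
		"k8s.io/client-go/tools/leaderelection/resourcelock"
	)

	func main() {
		cfg, err := rest.InClusterConfig()
		if err != nil {
			log.Fatal(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		// A coordination.k8s.io/v1 Lease replaces the deprecated Endpoints-based
		// lock, so the API server has nothing to warn about on each renewal.
		lock := &resourcelock.LeaseLock{
			LeaseMeta:  metav1.ObjectMeta{Name: "storage-provisioner", Namespace: "kube-system"},
			Client:     client.CoordinationV1(),
			LockConfig: resourcelock.ResourceLockConfig{Identity: os.Getenv("HOSTNAME")},
		}

		leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
			Lock:          lock,
			LeaseDuration: 15 * time.Second,
			RenewDeadline: 10 * time.Second,
			RetryPeriod:   2 * time.Second,
			Callbacks: leaderelection.LeaderCallbacks{
				OnStartedLeading: func(ctx context.Context) { log.Println("became leader; start provisioning") },
				OnStoppedLeading: func() { log.Println("lost leadership; stop provisioning") },
			},
		})
	}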
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-647907 -n addons-647907
helpers_test.go:269: (dbg) Run:  kubectl --context addons-647907 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-zgt7r ingress-nginx-admission-patch-ckrmb registry-creds-764b6fb674-pf26d
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-647907 describe pod ingress-nginx-admission-create-zgt7r ingress-nginx-admission-patch-ckrmb registry-creds-764b6fb674-pf26d
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-647907 describe pod ingress-nginx-admission-create-zgt7r ingress-nginx-admission-patch-ckrmb registry-creds-764b6fb674-pf26d: exit status 1 (85.462629ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-zgt7r" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-ckrmb" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-pf26d" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-647907 describe pod ingress-nginx-admission-create-zgt7r ingress-nginx-admission-patch-ckrmb registry-creds-764b6fb674-pf26d: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-647907 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-647907 addons disable headlamp --alsologtostderr -v=1: exit status 11 (255.753617ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1124 13:17:17.653574   11949 out.go:360] Setting OutFile to fd 1 ...
	I1124 13:17:17.653817   11949 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:17:17.653846   11949 out.go:374] Setting ErrFile to fd 2...
	I1124 13:17:17.653866   11949 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:17:17.654290   11949 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-2805/.minikube/bin
	I1124 13:17:17.654682   11949 mustload.go:66] Loading cluster: addons-647907
	I1124 13:17:17.655622   11949 config.go:182] Loaded profile config "addons-647907": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 13:17:17.655656   11949 addons.go:622] checking whether the cluster is paused
	I1124 13:17:17.655805   11949 config.go:182] Loaded profile config "addons-647907": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 13:17:17.655823   11949 host.go:66] Checking if "addons-647907" exists ...
	I1124 13:17:17.656340   11949 cli_runner.go:164] Run: docker container inspect addons-647907 --format={{.State.Status}}
	I1124 13:17:17.678259   11949 ssh_runner.go:195] Run: systemctl --version
	I1124 13:17:17.678318   11949 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-647907
	I1124 13:17:17.698645   11949 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/addons-647907/id_rsa Username:docker}
	I1124 13:17:17.801989   11949 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 13:17:17.802071   11949 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 13:17:17.831672   11949 cri.go:89] found id: "f5728fafdcfd6c412d1ef4060b50df18e35cf7f5dc2269c8aa70dd91319f0405"
	I1124 13:17:17.831696   11949 cri.go:89] found id: "8ae9f6e9c70db26bf1fb36c7b9a0254fe500c317f2a6945d2f07ea86680fb3c8"
	I1124 13:17:17.831702   11949 cri.go:89] found id: "5b3fa06bb0192f03af3aabc7e7c455544064d84030d3f3d8fa1eb708a7e3beb5"
	I1124 13:17:17.831706   11949 cri.go:89] found id: "973aff8a30a4f5e51842fa26a1fbb6214e33d96b452253dcff6908718c2afe7c"
	I1124 13:17:17.831709   11949 cri.go:89] found id: "6b55d5b71eba63c5fc2e54b61ab020786ab11eaa1c8a4cf025c69555ae454c9d"
	I1124 13:17:17.831713   11949 cri.go:89] found id: "7e0552d507b6a5d2225b7ea9bed55bfaec54aa5b07b536fb5623d66660e99d5c"
	I1124 13:17:17.831721   11949 cri.go:89] found id: "1868812a690656d2ef78f1d40726b50c55d3757008e08cd19a56abefc60b8f0b"
	I1124 13:17:17.831725   11949 cri.go:89] found id: "e66116781aa16aa7e3505b699baab2f91eb0fe84f2d48e96da8ffaf7c370d972"
	I1124 13:17:17.831728   11949 cri.go:89] found id: "2d0baf6e276932cf15682e3bdf601a7b4d5900d45814832a33695f1bc733e4f2"
	I1124 13:17:17.831735   11949 cri.go:89] found id: "0ba479f65d38f41fd3e6c32354ac0c909a63bf28c2527a3b30d15cae5d824845"
	I1124 13:17:17.831738   11949 cri.go:89] found id: "e67c8d2fe588f3f54af6367c17ccdcd8e19cb3743d88b7ec8c67fe42ca1a460f"
	I1124 13:17:17.831741   11949 cri.go:89] found id: "eb8a3b03da33f63ba2961d22861575e453cf82588de943e9e2fb7ddc1c122f8e"
	I1124 13:17:17.831744   11949 cri.go:89] found id: "7ee4d3e3512b4459fe8cdc4d2749e1dd56d9f5a3d0dc86f8cdd7fbb19e41a97f"
	I1124 13:17:17.831747   11949 cri.go:89] found id: "06594980d0770dcb6f5dcacad55f96761b6d2067a3e3ce0eb929042eff4265f7"
	I1124 13:17:17.831751   11949 cri.go:89] found id: "586f58fd71be7749f41db1a712c05bfa756f658f9c19cfb9aa671c9a8b754c34"
	I1124 13:17:17.831756   11949 cri.go:89] found id: "8a76716af61b8877cf1b7e3b54f8de99e52ffe5a1a222437f661359793b8bc5a"
	I1124 13:17:17.831759   11949 cri.go:89] found id: "612efd74b90cefa74785056cca24a59600ffe50c4cd3d5dc64320505db3e6d46"
	I1124 13:17:17.831764   11949 cri.go:89] found id: "c4588bdbb8946f09fafad25ba2e09f39ff759f2b8cad2f6dadade51f4c71ae52"
	I1124 13:17:17.831767   11949 cri.go:89] found id: "646560c4253bbd4f4ca0e15d6bfd9bc9d55fa7f25775099b26b7343265d565aa"
	I1124 13:17:17.831770   11949 cri.go:89] found id: "47bfa25635dec103e10675178c02de7fcaf080dccd97feecc568c666df4bea66"
	I1124 13:17:17.831775   11949 cri.go:89] found id: "9bbf65dfab06c0f9b0a6968500261eebc250dd8b47864e172e662f184a66f382"
	I1124 13:17:17.831778   11949 cri.go:89] found id: "9eb49b73252f434f02f8d18b6013cd3bdd4b40c3b4046a35100d5350b9164167"
	I1124 13:17:17.831781   11949 cri.go:89] found id: "448d21b7cb2225b5c6cb8829a9d9cc5985f1bee205abf34cb9730bc9b186bc0d"
	I1124 13:17:17.831784   11949 cri.go:89] found id: ""
	I1124 13:17:17.831837   11949 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 13:17:17.846799   11949 out.go:203] 
	W1124 13:17:17.849745   11949 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T13:17:17Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T13:17:17Z" level=error msg="open /run/runc: no such file or directory"
	
	W1124 13:17:17.849776   11949 out.go:285] * 
	* 
	W1124 13:17:17.853989   11949 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1124 13:17:17.856868   11949 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable headlamp addon: args "out/minikube-linux-arm64 -p addons-647907 addons disable headlamp --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Headlamp (3.18s)
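Every addons-disable failure in this run has the same signature: minikube lists the kube-system containers over the CRI (cri.go:54), then shells out to `sudo runc list -f json` to check whether any of them are paused, and that second step fails with "open /run/runc: no such file or directory" because runc's default state directory does not exist on this CRI-O node. Nothing is actually paused; the check itself cannot run. A standalone sketch of the two steps, runnable as root on the node (the command lines mirror the log above; treating the missing state directory as "zero paused containers" is an assumption about how a tolerant check might behave, not current minikube behavior):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		// Step 1: list kube-system container IDs via the CRI, as cri.go does.
		ids, err := exec.Command("crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			fmt.Fprintln(os.Stderr, "crictl ps failed:", err)
			os.Exit(1)
		}
		fmt.Printf("kube-system containers: %d\n", len(strings.Fields(string(ids))))

		// Step 2: ask runc for container states. runc reads its state directory
		// (/run/runc by default); when it is absent, the command exits 1 and
		// minikube surfaces MK_ADDON_DISABLE_PAUSED, as seen above.
		state, err := exec.Command("runc", "list", "-f", "json").CombinedOutput()
		if err != nil {
			if strings.Contains(string(state), "no such file or directory") {
				// Assumption: no runc state directory means no runc-managed
				// containers, hence none can be paused.
				fmt.Println("no runc state; treating as zero paused containers")
				return
			}
			fmt.Fprintf(os.Stderr, "runc list failed: %v\n%s", err, state)
			os.Exit(1)
		}
		fmt.Println(string(state))
	}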

x
+
TestAddons/parallel/CloudSpanner (5.3s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-5bdddb765-s88qb" [b5587937-b962-4c54-b4ff-dfac4b46354c] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004584708s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-647907 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-647907 addons disable cloud-spanner --alsologtostderr -v=1: exit status 11 (293.071797ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1124 13:17:35.516001   12422 out.go:360] Setting OutFile to fd 1 ...
	I1124 13:17:35.516207   12422 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:17:35.516221   12422 out.go:374] Setting ErrFile to fd 2...
	I1124 13:17:35.516227   12422 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:17:35.516525   12422 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-2805/.minikube/bin
	I1124 13:17:35.516839   12422 mustload.go:66] Loading cluster: addons-647907
	I1124 13:17:35.517248   12422 config.go:182] Loaded profile config "addons-647907": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 13:17:35.517269   12422 addons.go:622] checking whether the cluster is paused
	I1124 13:17:35.517427   12422 config.go:182] Loaded profile config "addons-647907": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 13:17:35.517443   12422 host.go:66] Checking if "addons-647907" exists ...
	I1124 13:17:35.517988   12422 cli_runner.go:164] Run: docker container inspect addons-647907 --format={{.State.Status}}
	I1124 13:17:35.545375   12422 ssh_runner.go:195] Run: systemctl --version
	I1124 13:17:35.545437   12422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-647907
	I1124 13:17:35.564907   12422 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/addons-647907/id_rsa Username:docker}
	I1124 13:17:35.674170   12422 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 13:17:35.674273   12422 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 13:17:35.706039   12422 cri.go:89] found id: "f5728fafdcfd6c412d1ef4060b50df18e35cf7f5dc2269c8aa70dd91319f0405"
	I1124 13:17:35.706080   12422 cri.go:89] found id: "8ae9f6e9c70db26bf1fb36c7b9a0254fe500c317f2a6945d2f07ea86680fb3c8"
	I1124 13:17:35.706085   12422 cri.go:89] found id: "5b3fa06bb0192f03af3aabc7e7c455544064d84030d3f3d8fa1eb708a7e3beb5"
	I1124 13:17:35.706089   12422 cri.go:89] found id: "973aff8a30a4f5e51842fa26a1fbb6214e33d96b452253dcff6908718c2afe7c"
	I1124 13:17:35.706092   12422 cri.go:89] found id: "6b55d5b71eba63c5fc2e54b61ab020786ab11eaa1c8a4cf025c69555ae454c9d"
	I1124 13:17:35.706115   12422 cri.go:89] found id: "7e0552d507b6a5d2225b7ea9bed55bfaec54aa5b07b536fb5623d66660e99d5c"
	I1124 13:17:35.706123   12422 cri.go:89] found id: "1868812a690656d2ef78f1d40726b50c55d3757008e08cd19a56abefc60b8f0b"
	I1124 13:17:35.706127   12422 cri.go:89] found id: "e66116781aa16aa7e3505b699baab2f91eb0fe84f2d48e96da8ffaf7c370d972"
	I1124 13:17:35.706130   12422 cri.go:89] found id: "2d0baf6e276932cf15682e3bdf601a7b4d5900d45814832a33695f1bc733e4f2"
	I1124 13:17:35.706137   12422 cri.go:89] found id: "0ba479f65d38f41fd3e6c32354ac0c909a63bf28c2527a3b30d15cae5d824845"
	I1124 13:17:35.706156   12422 cri.go:89] found id: "e67c8d2fe588f3f54af6367c17ccdcd8e19cb3743d88b7ec8c67fe42ca1a460f"
	I1124 13:17:35.706166   12422 cri.go:89] found id: "eb8a3b03da33f63ba2961d22861575e453cf82588de943e9e2fb7ddc1c122f8e"
	I1124 13:17:35.706169   12422 cri.go:89] found id: "7ee4d3e3512b4459fe8cdc4d2749e1dd56d9f5a3d0dc86f8cdd7fbb19e41a97f"
	I1124 13:17:35.706173   12422 cri.go:89] found id: "06594980d0770dcb6f5dcacad55f96761b6d2067a3e3ce0eb929042eff4265f7"
	I1124 13:17:35.706176   12422 cri.go:89] found id: "586f58fd71be7749f41db1a712c05bfa756f658f9c19cfb9aa671c9a8b754c34"
	I1124 13:17:35.706197   12422 cri.go:89] found id: "8a76716af61b8877cf1b7e3b54f8de99e52ffe5a1a222437f661359793b8bc5a"
	I1124 13:17:35.706207   12422 cri.go:89] found id: "612efd74b90cefa74785056cca24a59600ffe50c4cd3d5dc64320505db3e6d46"
	I1124 13:17:35.706213   12422 cri.go:89] found id: "c4588bdbb8946f09fafad25ba2e09f39ff759f2b8cad2f6dadade51f4c71ae52"
	I1124 13:17:35.706217   12422 cri.go:89] found id: "646560c4253bbd4f4ca0e15d6bfd9bc9d55fa7f25775099b26b7343265d565aa"
	I1124 13:17:35.706239   12422 cri.go:89] found id: "47bfa25635dec103e10675178c02de7fcaf080dccd97feecc568c666df4bea66"
	I1124 13:17:35.706245   12422 cri.go:89] found id: "9bbf65dfab06c0f9b0a6968500261eebc250dd8b47864e172e662f184a66f382"
	I1124 13:17:35.706249   12422 cri.go:89] found id: "9eb49b73252f434f02f8d18b6013cd3bdd4b40c3b4046a35100d5350b9164167"
	I1124 13:17:35.706252   12422 cri.go:89] found id: "448d21b7cb2225b5c6cb8829a9d9cc5985f1bee205abf34cb9730bc9b186bc0d"
	I1124 13:17:35.706255   12422 cri.go:89] found id: ""
	I1124 13:17:35.706333   12422 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 13:17:35.724334   12422 out.go:203] 
	W1124 13:17:35.728630   12422 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T13:17:35Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T13:17:35Z" level=error msg="open /run/runc: no such file or directory"
	
	W1124 13:17:35.728656   12422 out.go:285] * 
	* 
	W1124 13:17:35.732956   12422 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1124 13:17:35.736648   12422 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable cloud-spanner addon: args "out/minikube-linux-arm64 -p addons-647907 addons disable cloud-spanner --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CloudSpanner (5.30s)
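For reference, when `runc list -f json` does succeed it prints a JSON array of container state objects, and a paused check only needs the id and status fields. A small parsing sketch (the field names follow runc's state output; the sample input is fabricated for illustration):

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// runcContainer holds the two fields of runc's state JSON that a
	// paused-container check cares about.
	type runcContainer struct {
		ID     string `json:"id"`
		Status string `json:"status"` // e.g. "created", "running", "paused", "stopped"
	}

	// pausedIDs returns the IDs of containers whose status is "paused".
	func pausedIDs(raw []byte) ([]string, error) {
		var list []runcContainer
		if err := json.Unmarshal(raw, &list); err != nil {
			return nil, err
		}
		var paused []string
		for _, c := range list {
			if c.Status == "paused" {
				paused = append(paused, c.ID)
			}
		}
		return paused, nil
	}

	func main() {
		sample := []byte(`[{"id":"abc","status":"paused"},{"id":"def","status":"running"}]`)
		ids, err := pausedIDs(sample)
		if err != nil {
			panic(err)
		}
		fmt.Println(ids) // [abc]
	}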

x
+
TestAddons/parallel/LocalPath (9.44s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-647907 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-647907 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-647907 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-647907 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-647907 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-647907 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-647907 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [cf1575ea-61d3-4d87-b47b-f86e5177edf8] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [cf1575ea-61d3-4d87-b47b-f86e5177edf8] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [cf1575ea-61d3-4d87-b47b-f86e5177edf8] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.00244473s
addons_test.go:967: (dbg) Run:  kubectl --context addons-647907 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-arm64 -p addons-647907 ssh "cat /opt/local-path-provisioner/pvc-087c4ef5-30ca-4efc-9e47-792885953111_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-647907 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-647907 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-647907 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-647907 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (277.942562ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1124 13:17:39.513734   12599 out.go:360] Setting OutFile to fd 1 ...
	I1124 13:17:39.513981   12599 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:17:39.514012   12599 out.go:374] Setting ErrFile to fd 2...
	I1124 13:17:39.514032   12599 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:17:39.514374   12599 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-2805/.minikube/bin
	I1124 13:17:39.514721   12599 mustload.go:66] Loading cluster: addons-647907
	I1124 13:17:39.515157   12599 config.go:182] Loaded profile config "addons-647907": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 13:17:39.515198   12599 addons.go:622] checking whether the cluster is paused
	I1124 13:17:39.515348   12599 config.go:182] Loaded profile config "addons-647907": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 13:17:39.515392   12599 host.go:66] Checking if "addons-647907" exists ...
	I1124 13:17:39.515943   12599 cli_runner.go:164] Run: docker container inspect addons-647907 --format={{.State.Status}}
	I1124 13:17:39.533667   12599 ssh_runner.go:195] Run: systemctl --version
	I1124 13:17:39.533720   12599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-647907
	I1124 13:17:39.560470   12599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/addons-647907/id_rsa Username:docker}
	I1124 13:17:39.673819   12599 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 13:17:39.673911   12599 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 13:17:39.703951   12599 cri.go:89] found id: "f5728fafdcfd6c412d1ef4060b50df18e35cf7f5dc2269c8aa70dd91319f0405"
	I1124 13:17:39.703974   12599 cri.go:89] found id: "8ae9f6e9c70db26bf1fb36c7b9a0254fe500c317f2a6945d2f07ea86680fb3c8"
	I1124 13:17:39.703979   12599 cri.go:89] found id: "5b3fa06bb0192f03af3aabc7e7c455544064d84030d3f3d8fa1eb708a7e3beb5"
	I1124 13:17:39.703995   12599 cri.go:89] found id: "973aff8a30a4f5e51842fa26a1fbb6214e33d96b452253dcff6908718c2afe7c"
	I1124 13:17:39.703999   12599 cri.go:89] found id: "6b55d5b71eba63c5fc2e54b61ab020786ab11eaa1c8a4cf025c69555ae454c9d"
	I1124 13:17:39.704003   12599 cri.go:89] found id: "7e0552d507b6a5d2225b7ea9bed55bfaec54aa5b07b536fb5623d66660e99d5c"
	I1124 13:17:39.704006   12599 cri.go:89] found id: "1868812a690656d2ef78f1d40726b50c55d3757008e08cd19a56abefc60b8f0b"
	I1124 13:17:39.704010   12599 cri.go:89] found id: "e66116781aa16aa7e3505b699baab2f91eb0fe84f2d48e96da8ffaf7c370d972"
	I1124 13:17:39.704013   12599 cri.go:89] found id: "2d0baf6e276932cf15682e3bdf601a7b4d5900d45814832a33695f1bc733e4f2"
	I1124 13:17:39.704019   12599 cri.go:89] found id: "0ba479f65d38f41fd3e6c32354ac0c909a63bf28c2527a3b30d15cae5d824845"
	I1124 13:17:39.704022   12599 cri.go:89] found id: "e67c8d2fe588f3f54af6367c17ccdcd8e19cb3743d88b7ec8c67fe42ca1a460f"
	I1124 13:17:39.704025   12599 cri.go:89] found id: "eb8a3b03da33f63ba2961d22861575e453cf82588de943e9e2fb7ddc1c122f8e"
	I1124 13:17:39.704028   12599 cri.go:89] found id: "7ee4d3e3512b4459fe8cdc4d2749e1dd56d9f5a3d0dc86f8cdd7fbb19e41a97f"
	I1124 13:17:39.704032   12599 cri.go:89] found id: "06594980d0770dcb6f5dcacad55f96761b6d2067a3e3ce0eb929042eff4265f7"
	I1124 13:17:39.704035   12599 cri.go:89] found id: "586f58fd71be7749f41db1a712c05bfa756f658f9c19cfb9aa671c9a8b754c34"
	I1124 13:17:39.704040   12599 cri.go:89] found id: "8a76716af61b8877cf1b7e3b54f8de99e52ffe5a1a222437f661359793b8bc5a"
	I1124 13:17:39.704047   12599 cri.go:89] found id: "612efd74b90cefa74785056cca24a59600ffe50c4cd3d5dc64320505db3e6d46"
	I1124 13:17:39.704051   12599 cri.go:89] found id: "c4588bdbb8946f09fafad25ba2e09f39ff759f2b8cad2f6dadade51f4c71ae52"
	I1124 13:17:39.704054   12599 cri.go:89] found id: "646560c4253bbd4f4ca0e15d6bfd9bc9d55fa7f25775099b26b7343265d565aa"
	I1124 13:17:39.704057   12599 cri.go:89] found id: "47bfa25635dec103e10675178c02de7fcaf080dccd97feecc568c666df4bea66"
	I1124 13:17:39.704062   12599 cri.go:89] found id: "9bbf65dfab06c0f9b0a6968500261eebc250dd8b47864e172e662f184a66f382"
	I1124 13:17:39.704069   12599 cri.go:89] found id: "9eb49b73252f434f02f8d18b6013cd3bdd4b40c3b4046a35100d5350b9164167"
	I1124 13:17:39.704072   12599 cri.go:89] found id: "448d21b7cb2225b5c6cb8829a9d9cc5985f1bee205abf34cb9730bc9b186bc0d"
	I1124 13:17:39.704075   12599 cri.go:89] found id: ""
	I1124 13:17:39.704164   12599 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 13:17:39.721295   12599 out.go:203] 
	W1124 13:17:39.724891   12599 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T13:17:39Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T13:17:39Z" level=error msg="open /run/runc: no such file or directory"
	
	W1124 13:17:39.724920   12599 out.go:285] * 
	* 
	W1124 13:17:39.729289   12599 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1124 13:17:39.733258   12599 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-arm64 -p addons-647907 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (9.44s)
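Worth noting: the LocalPath flow itself passed end to end (the PVC bound, the test pod wrote file1, and the harness read it back over ssh); only the trailing addons-disable step hit the runc error. The phase polling that helpers_test.go:402 performs with repeated kubectl jsonpath queries looks roughly like this in client-go (the function name and timings are illustrative assumptions):

	package pvcwait

	import (
		"context"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// waitPVCBound polls a PVC until .status.phase becomes Bound or the timeout
	// expires, mirroring the repeated
	// `kubectl get pvc test-pvc -o jsonpath={.status.phase}` calls above.
	func waitPVCBound(ctx context.Context, c kubernetes.Interface, ns, name string, timeout time.Duration) error {
		return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true,
			func(ctx context.Context) (bool, error) {
				pvc, err := c.CoreV1().PersistentVolumeClaims(ns).Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					return false, nil // transient lookup errors: keep polling
				}
				return pvc.Status.Phase == corev1.ClaimBound, nil
			})
	}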

x
+
TestAddons/parallel/NvidiaDevicePlugin (6.31s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-dn469" [983203db-3631-4a06-80b8-418beda496e4] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.004044424s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-647907 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-647907 addons disable nvidia-device-plugin --alsologtostderr -v=1: exit status 11 (305.613606ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1124 13:17:30.195716   12220 out.go:360] Setting OutFile to fd 1 ...
	I1124 13:17:30.195962   12220 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:17:30.195995   12220 out.go:374] Setting ErrFile to fd 2...
	I1124 13:17:30.196017   12220 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:17:30.196313   12220 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-2805/.minikube/bin
	I1124 13:17:30.196625   12220 mustload.go:66] Loading cluster: addons-647907
	I1124 13:17:30.197059   12220 config.go:182] Loaded profile config "addons-647907": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 13:17:30.197104   12220 addons.go:622] checking whether the cluster is paused
	I1124 13:17:30.197238   12220 config.go:182] Loaded profile config "addons-647907": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 13:17:30.197272   12220 host.go:66] Checking if "addons-647907" exists ...
	I1124 13:17:30.197807   12220 cli_runner.go:164] Run: docker container inspect addons-647907 --format={{.State.Status}}
	I1124 13:17:30.217350   12220 ssh_runner.go:195] Run: systemctl --version
	I1124 13:17:30.217412   12220 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-647907
	I1124 13:17:30.240939   12220 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/addons-647907/id_rsa Username:docker}
	I1124 13:17:30.358604   12220 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 13:17:30.358691   12220 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 13:17:30.399230   12220 cri.go:89] found id: "f5728fafdcfd6c412d1ef4060b50df18e35cf7f5dc2269c8aa70dd91319f0405"
	I1124 13:17:30.399269   12220 cri.go:89] found id: "8ae9f6e9c70db26bf1fb36c7b9a0254fe500c317f2a6945d2f07ea86680fb3c8"
	I1124 13:17:30.399274   12220 cri.go:89] found id: "5b3fa06bb0192f03af3aabc7e7c455544064d84030d3f3d8fa1eb708a7e3beb5"
	I1124 13:17:30.399277   12220 cri.go:89] found id: "973aff8a30a4f5e51842fa26a1fbb6214e33d96b452253dcff6908718c2afe7c"
	I1124 13:17:30.399281   12220 cri.go:89] found id: "6b55d5b71eba63c5fc2e54b61ab020786ab11eaa1c8a4cf025c69555ae454c9d"
	I1124 13:17:30.399284   12220 cri.go:89] found id: "7e0552d507b6a5d2225b7ea9bed55bfaec54aa5b07b536fb5623d66660e99d5c"
	I1124 13:17:30.399287   12220 cri.go:89] found id: "1868812a690656d2ef78f1d40726b50c55d3757008e08cd19a56abefc60b8f0b"
	I1124 13:17:30.399290   12220 cri.go:89] found id: "e66116781aa16aa7e3505b699baab2f91eb0fe84f2d48e96da8ffaf7c370d972"
	I1124 13:17:30.399293   12220 cri.go:89] found id: "2d0baf6e276932cf15682e3bdf601a7b4d5900d45814832a33695f1bc733e4f2"
	I1124 13:17:30.399302   12220 cri.go:89] found id: "0ba479f65d38f41fd3e6c32354ac0c909a63bf28c2527a3b30d15cae5d824845"
	I1124 13:17:30.399306   12220 cri.go:89] found id: "e67c8d2fe588f3f54af6367c17ccdcd8e19cb3743d88b7ec8c67fe42ca1a460f"
	I1124 13:17:30.399309   12220 cri.go:89] found id: "eb8a3b03da33f63ba2961d22861575e453cf82588de943e9e2fb7ddc1c122f8e"
	I1124 13:17:30.399313   12220 cri.go:89] found id: "7ee4d3e3512b4459fe8cdc4d2749e1dd56d9f5a3d0dc86f8cdd7fbb19e41a97f"
	I1124 13:17:30.399316   12220 cri.go:89] found id: "06594980d0770dcb6f5dcacad55f96761b6d2067a3e3ce0eb929042eff4265f7"
	I1124 13:17:30.399320   12220 cri.go:89] found id: "586f58fd71be7749f41db1a712c05bfa756f658f9c19cfb9aa671c9a8b754c34"
	I1124 13:17:30.399328   12220 cri.go:89] found id: "8a76716af61b8877cf1b7e3b54f8de99e52ffe5a1a222437f661359793b8bc5a"
	I1124 13:17:30.399334   12220 cri.go:89] found id: "612efd74b90cefa74785056cca24a59600ffe50c4cd3d5dc64320505db3e6d46"
	I1124 13:17:30.399339   12220 cri.go:89] found id: "c4588bdbb8946f09fafad25ba2e09f39ff759f2b8cad2f6dadade51f4c71ae52"
	I1124 13:17:30.399342   12220 cri.go:89] found id: "646560c4253bbd4f4ca0e15d6bfd9bc9d55fa7f25775099b26b7343265d565aa"
	I1124 13:17:30.399345   12220 cri.go:89] found id: "47bfa25635dec103e10675178c02de7fcaf080dccd97feecc568c666df4bea66"
	I1124 13:17:30.399349   12220 cri.go:89] found id: "9bbf65dfab06c0f9b0a6968500261eebc250dd8b47864e172e662f184a66f382"
	I1124 13:17:30.399406   12220 cri.go:89] found id: "9eb49b73252f434f02f8d18b6013cd3bdd4b40c3b4046a35100d5350b9164167"
	I1124 13:17:30.399409   12220 cri.go:89] found id: "448d21b7cb2225b5c6cb8829a9d9cc5985f1bee205abf34cb9730bc9b186bc0d"
	I1124 13:17:30.399412   12220 cri.go:89] found id: ""
	I1124 13:17:30.399470   12220 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 13:17:30.420718   12220 out.go:203] 
	W1124 13:17:30.423495   12220 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T13:17:30Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T13:17:30Z" level=error msg="open /run/runc: no such file or directory"
	
	W1124 13:17:30.423521   12220 out.go:285] * 
	* 
	W1124 13:17:30.427886   12220 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1124 13:17:30.430800   12220 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable nvidia-device-plugin addon: args "out/minikube-linux-arm64 -p addons-647907 addons disable nvidia-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (6.31s)

x
+
TestAddons/parallel/Yakd (6.27s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-rt8kg" [dc43ae5e-1297-4e10-9573-2a12cefa4328] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003248663s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-647907 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-647907 addons disable yakd --alsologtostderr -v=1: exit status 11 (263.324008ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1124 13:17:23.917278   12025 out.go:360] Setting OutFile to fd 1 ...
	I1124 13:17:23.917471   12025 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:17:23.917483   12025 out.go:374] Setting ErrFile to fd 2...
	I1124 13:17:23.917490   12025 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:17:23.917778   12025 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-2805/.minikube/bin
	I1124 13:17:23.918123   12025 mustload.go:66] Loading cluster: addons-647907
	I1124 13:17:23.918540   12025 config.go:182] Loaded profile config "addons-647907": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 13:17:23.918560   12025 addons.go:622] checking whether the cluster is paused
	I1124 13:17:23.918730   12025 config.go:182] Loaded profile config "addons-647907": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 13:17:23.918751   12025 host.go:66] Checking if "addons-647907" exists ...
	I1124 13:17:23.919464   12025 cli_runner.go:164] Run: docker container inspect addons-647907 --format={{.State.Status}}
	I1124 13:17:23.939697   12025 ssh_runner.go:195] Run: systemctl --version
	I1124 13:17:23.939764   12025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-647907
	I1124 13:17:23.960848   12025 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/addons-647907/id_rsa Username:docker}
	I1124 13:17:24.066134   12025 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 13:17:24.066216   12025 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 13:17:24.098976   12025 cri.go:89] found id: "f5728fafdcfd6c412d1ef4060b50df18e35cf7f5dc2269c8aa70dd91319f0405"
	I1124 13:17:24.099000   12025 cri.go:89] found id: "8ae9f6e9c70db26bf1fb36c7b9a0254fe500c317f2a6945d2f07ea86680fb3c8"
	I1124 13:17:24.099006   12025 cri.go:89] found id: "5b3fa06bb0192f03af3aabc7e7c455544064d84030d3f3d8fa1eb708a7e3beb5"
	I1124 13:17:24.099010   12025 cri.go:89] found id: "973aff8a30a4f5e51842fa26a1fbb6214e33d96b452253dcff6908718c2afe7c"
	I1124 13:17:24.099019   12025 cri.go:89] found id: "6b55d5b71eba63c5fc2e54b61ab020786ab11eaa1c8a4cf025c69555ae454c9d"
	I1124 13:17:24.099023   12025 cri.go:89] found id: "7e0552d507b6a5d2225b7ea9bed55bfaec54aa5b07b536fb5623d66660e99d5c"
	I1124 13:17:24.099026   12025 cri.go:89] found id: "1868812a690656d2ef78f1d40726b50c55d3757008e08cd19a56abefc60b8f0b"
	I1124 13:17:24.099030   12025 cri.go:89] found id: "e66116781aa16aa7e3505b699baab2f91eb0fe84f2d48e96da8ffaf7c370d972"
	I1124 13:17:24.099034   12025 cri.go:89] found id: "2d0baf6e276932cf15682e3bdf601a7b4d5900d45814832a33695f1bc733e4f2"
	I1124 13:17:24.099040   12025 cri.go:89] found id: "0ba479f65d38f41fd3e6c32354ac0c909a63bf28c2527a3b30d15cae5d824845"
	I1124 13:17:24.099044   12025 cri.go:89] found id: "e67c8d2fe588f3f54af6367c17ccdcd8e19cb3743d88b7ec8c67fe42ca1a460f"
	I1124 13:17:24.099047   12025 cri.go:89] found id: "eb8a3b03da33f63ba2961d22861575e453cf82588de943e9e2fb7ddc1c122f8e"
	I1124 13:17:24.099050   12025 cri.go:89] found id: "7ee4d3e3512b4459fe8cdc4d2749e1dd56d9f5a3d0dc86f8cdd7fbb19e41a97f"
	I1124 13:17:24.099055   12025 cri.go:89] found id: "06594980d0770dcb6f5dcacad55f96761b6d2067a3e3ce0eb929042eff4265f7"
	I1124 13:17:24.099058   12025 cri.go:89] found id: "586f58fd71be7749f41db1a712c05bfa756f658f9c19cfb9aa671c9a8b754c34"
	I1124 13:17:24.099064   12025 cri.go:89] found id: "8a76716af61b8877cf1b7e3b54f8de99e52ffe5a1a222437f661359793b8bc5a"
	I1124 13:17:24.099070   12025 cri.go:89] found id: "612efd74b90cefa74785056cca24a59600ffe50c4cd3d5dc64320505db3e6d46"
	I1124 13:17:24.099074   12025 cri.go:89] found id: "c4588bdbb8946f09fafad25ba2e09f39ff759f2b8cad2f6dadade51f4c71ae52"
	I1124 13:17:24.099077   12025 cri.go:89] found id: "646560c4253bbd4f4ca0e15d6bfd9bc9d55fa7f25775099b26b7343265d565aa"
	I1124 13:17:24.099080   12025 cri.go:89] found id: "47bfa25635dec103e10675178c02de7fcaf080dccd97feecc568c666df4bea66"
	I1124 13:17:24.099085   12025 cri.go:89] found id: "9bbf65dfab06c0f9b0a6968500261eebc250dd8b47864e172e662f184a66f382"
	I1124 13:17:24.099088   12025 cri.go:89] found id: "9eb49b73252f434f02f8d18b6013cd3bdd4b40c3b4046a35100d5350b9164167"
	I1124 13:17:24.099091   12025 cri.go:89] found id: "448d21b7cb2225b5c6cb8829a9d9cc5985f1bee205abf34cb9730bc9b186bc0d"
	I1124 13:17:24.099094   12025 cri.go:89] found id: ""
	I1124 13:17:24.099152   12025 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 13:17:24.114363   12025 out.go:203] 
	W1124 13:17:24.117371   12025 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T13:17:24Z" level=error msg="open /run/runc: no such file or directory"
	
	W1124 13:17:24.117403   12025 out.go:285] * 
	W1124 13:17:24.121853   12025 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1124 13:17:24.124828   12025 out.go:203] 
** /stderr **
addons_test.go:1055: failed to disable yakd addon: args "out/minikube-linux-arm64 -p addons-647907 addons disable yakd --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Yakd (6.27s)
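The disable failure above is in minikube's pause check rather than in the yakd addon itself: the addon code first lists kube-system containers through crictl (which succeeds, per the found-id lines), then shells out to `sudo runc list -f json`, which exits non-zero because /run/runc does not exist on the node. A minimal triage sketch, assuming the profile name from this run; the runtime-root paths below are assumptions, since cri-o may be configured with crun or a non-default root:

	# Which low-level runtime state directories actually exist on the node?
	minikube -p addons-647907 ssh -- 'ls -d /run/runc /run/crun 2>&1'
	# How is cri-o's low-level runtime configured?
	minikube -p addons-647907 ssh -- 'grep -rn "default_runtime\|runtime_path" /etc/crio/ 2>/dev/null'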
TestFunctional/parallel/ServiceCmdConnect (603.39s)
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-471703 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-471703 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-l5lkz" [38f1a276-9880-4d11-817c-08f82bab9329] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctional/parallel/ServiceCmdConnect: WARNING: pod list for "default" "app=hello-node-connect" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1645: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-471703 -n functional-471703
functional_test.go:1645: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-11-24 13:34:24.434623372 +0000 UTC m=+1227.747844691
functional_test.go:1645: (dbg) Run:  kubectl --context functional-471703 describe po hello-node-connect-7d85dfc575-l5lkz -n default
functional_test.go:1645: (dbg) kubectl --context functional-471703 describe po hello-node-connect-7d85dfc575-l5lkz -n default:
Name:             hello-node-connect-7d85dfc575-l5lkz
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-471703/192.168.49.2
Start Time:       Mon, 24 Nov 2025 13:24:24 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.7
IPs:
IP:           10.244.0.7
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-t94pb (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-t94pb:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                  From               Message
----     ------     ----                 ----               -------
Normal   Scheduled  10m                  default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-l5lkz to functional-471703
Normal   Pulling    7m10s (x5 over 10m)  kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m10s (x5 over 10m)  kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     7m10s (x5 over 10m)  kubelet            Error: ErrImagePull
Normal   BackOff    0s (x43 over 10m)    kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     0s (x43 over 10m)    kubelet            Error: ImagePullBackOff
functional_test.go:1645: (dbg) Run:  kubectl --context functional-471703 logs hello-node-connect-7d85dfc575-l5lkz -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-471703 logs hello-node-connect-7d85dfc575-l5lkz -n default: exit status 1 (98.492182ms)
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-l5lkz" is waiting to start: trying and failing to pull image
** /stderr **
functional_test.go:1645: kubectl --context functional-471703 logs hello-node-connect-7d85dfc575-l5lkz -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
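The kubelet events above point at the root cause: `kicbase/echo-server` is an unqualified (short) image name, and the node's registry configuration has short-name mode set to enforcing, so a short name that resolves ambiguously against the configured registry list is rejected instead of pulled. A minimal sketch of the workaround, assuming the image is the Docker Hub copy (the fully-qualified path is an assumption about where the test intends to pull from):

	kubectl --context functional-471703 create deployment hello-node-connect \
	  --image docker.io/kicbase/echo-server

Relaxing short-name-mode in the node's registries.conf would also let the ambiguous pull through, at the cost of looser image-name resolution.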
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-471703 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-7d85dfc575-l5lkz
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-471703/192.168.49.2
Start Time:       Mon, 24 Nov 2025 13:24:24 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.7
IPs:
IP:           10.244.0.7
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-t94pb (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-t94pb:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                  From               Message
----     ------     ----                 ----               -------
Normal   Scheduled  10m                  default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-l5lkz to functional-471703
Normal   Pulling    7m10s (x5 over 10m)  kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m10s (x5 over 10m)  kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     7m10s (x5 over 10m)  kubelet            Error: ErrImagePull
Normal   BackOff    0s (x43 over 10m)    kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     0s (x43 over 10m)    kubelet            Error: ImagePullBackOff
functional_test.go:1618: (dbg) Run:  kubectl --context functional-471703 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-471703 logs -l app=hello-node-connect: exit status 1 (87.160363ms)
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-l5lkz" is waiting to start: trying and failing to pull image
** /stderr **
functional_test.go:1620: "kubectl --context functional-471703 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-471703 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.101.195.205
IPs:                      10.101.195.205
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  31619/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
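The empty Endpoints field is the expected downstream symptom: with no Ready pod behind the app=hello-node-connect selector, NodePort 31619 has nothing to forward to, so a connect test would fail even though the Service object itself is well-formed. A quick confirmation (a sketch, using the context from this run):

	kubectl --context functional-471703 get endpoints hello-node-connect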
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-471703
helpers_test.go:243: (dbg) docker inspect functional-471703:
-- stdout --
	[
	    {
	        "Id": "30025690bc20b08614806a58a6d882530615bef88c19257afd3f6bb47727d6b5",
	        "Created": "2025-11-24T13:21:22.002984401Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 20196,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T13:21:22.077287354Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/30025690bc20b08614806a58a6d882530615bef88c19257afd3f6bb47727d6b5/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/30025690bc20b08614806a58a6d882530615bef88c19257afd3f6bb47727d6b5/hostname",
	        "HostsPath": "/var/lib/docker/containers/30025690bc20b08614806a58a6d882530615bef88c19257afd3f6bb47727d6b5/hosts",
	        "LogPath": "/var/lib/docker/containers/30025690bc20b08614806a58a6d882530615bef88c19257afd3f6bb47727d6b5/30025690bc20b08614806a58a6d882530615bef88c19257afd3f6bb47727d6b5-json.log",
	        "Name": "/functional-471703",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-471703:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-471703",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "30025690bc20b08614806a58a6d882530615bef88c19257afd3f6bb47727d6b5",
	                "LowerDir": "/var/lib/docker/overlay2/4e315ab880d8e35e3e8c2b26eb9758a747eaf14582aa8ca54a3b054650c9abbf-init/diff:/var/lib/docker/overlay2/13a44a1c9c7389f495d930a01834ff28273a0e5eb2fe3411fc4db3ff0709690d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4e315ab880d8e35e3e8c2b26eb9758a747eaf14582aa8ca54a3b054650c9abbf/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4e315ab880d8e35e3e8c2b26eb9758a747eaf14582aa8ca54a3b054650c9abbf/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4e315ab880d8e35e3e8c2b26eb9758a747eaf14582aa8ca54a3b054650c9abbf/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-471703",
	                "Source": "/var/lib/docker/volumes/functional-471703/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-471703",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-471703",
	                "name.minikube.sigs.k8s.io": "functional-471703",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ad511368cd37dec0178e5da22af44203baad39d20850a8448d2c6161e1342dcd",
	            "SandboxKey": "/var/run/docker/netns/ad511368cd37",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-471703": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "de:d0:4a:bd:3d:44",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1d5218274f83fff815a9a5b854ba868a2f309a4a8eef3944bbdaf5cf8c773a0a",
	                    "EndpointID": "60ff89ce69b9dac9a29c5ea11a09240b86407d3a94c73e9f732dfb67a0c0693e",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-471703",
	                        "30025690bc20"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
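For orientation when reading the inspect output: the profile's API server port 8441/tcp is published on the host at 127.0.0.1:32781, which is how the test binary reaches the cluster. The mapping can be read back directly (a sketch, with the container name taken from this run):

	docker port functional-471703 8441/tcp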
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-471703 -n functional-471703
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-471703 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-471703 logs -n 25: (1.419446226s)
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                               ARGS                                                                │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ functional-471703 ssh findmnt -T /mount-9p | grep 9p                                                                              │ functional-471703 │ jenkins │ v1.37.0 │ 24 Nov 25 13:24 UTC │                     │
	│ ssh     │ functional-471703 ssh -n functional-471703 sudo cat /home/docker/cp-test.txt                                                      │ functional-471703 │ jenkins │ v1.37.0 │ 24 Nov 25 13:24 UTC │ 24 Nov 25 13:24 UTC │
	│ cp      │ functional-471703 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                         │ functional-471703 │ jenkins │ v1.37.0 │ 24 Nov 25 13:24 UTC │ 24 Nov 25 13:24 UTC │
	│ ssh     │ functional-471703 ssh -n functional-471703 sudo cat /tmp/does/not/exist/cp-test.txt                                               │ functional-471703 │ jenkins │ v1.37.0 │ 24 Nov 25 13:24 UTC │ 24 Nov 25 13:24 UTC │
	│ ssh     │ functional-471703 ssh findmnt -T /mount-9p | grep 9p                                                                              │ functional-471703 │ jenkins │ v1.37.0 │ 24 Nov 25 13:24 UTC │ 24 Nov 25 13:24 UTC │
	│ ssh     │ functional-471703 ssh -- ls -la /mount-9p                                                                                         │ functional-471703 │ jenkins │ v1.37.0 │ 24 Nov 25 13:24 UTC │ 24 Nov 25 13:24 UTC │
	│ ssh     │ functional-471703 ssh cat /mount-9p/test-1763990651029827728                                                                      │ functional-471703 │ jenkins │ v1.37.0 │ 24 Nov 25 13:24 UTC │ 24 Nov 25 13:24 UTC │
	│ ssh     │ functional-471703 ssh stat /mount-9p/created-by-test                                                                              │ functional-471703 │ jenkins │ v1.37.0 │ 24 Nov 25 13:24 UTC │ 24 Nov 25 13:24 UTC │
	│ ssh     │ functional-471703 ssh stat /mount-9p/created-by-pod                                                                               │ functional-471703 │ jenkins │ v1.37.0 │ 24 Nov 25 13:24 UTC │ 24 Nov 25 13:24 UTC │
	│ ssh     │ functional-471703 ssh sudo umount -f /mount-9p                                                                                    │ functional-471703 │ jenkins │ v1.37.0 │ 24 Nov 25 13:24 UTC │ 24 Nov 25 13:24 UTC │
	│ mount   │ -p functional-471703 /tmp/TestFunctionalparallelMountCmdspecific-port1762126064/001:/mount-9p --alsologtostderr -v=1 --port 46464 │ functional-471703 │ jenkins │ v1.37.0 │ 24 Nov 25 13:24 UTC │                     │
	│ ssh     │ functional-471703 ssh findmnt -T /mount-9p | grep 9p                                                                              │ functional-471703 │ jenkins │ v1.37.0 │ 24 Nov 25 13:24 UTC │                     │
	│ ssh     │ functional-471703 ssh findmnt -T /mount-9p | grep 9p                                                                              │ functional-471703 │ jenkins │ v1.37.0 │ 24 Nov 25 13:24 UTC │ 24 Nov 25 13:24 UTC │
	│ ssh     │ functional-471703 ssh -- ls -la /mount-9p                                                                                         │ functional-471703 │ jenkins │ v1.37.0 │ 24 Nov 25 13:24 UTC │ 24 Nov 25 13:24 UTC │
	│ ssh     │ functional-471703 ssh sudo umount -f /mount-9p                                                                                    │ functional-471703 │ jenkins │ v1.37.0 │ 24 Nov 25 13:24 UTC │                     │
	│ ssh     │ functional-471703 ssh findmnt -T /mount1                                                                                          │ functional-471703 │ jenkins │ v1.37.0 │ 24 Nov 25 13:24 UTC │                     │
	│ mount   │ -p functional-471703 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2246723281/001:/mount2 --alsologtostderr -v=1                │ functional-471703 │ jenkins │ v1.37.0 │ 24 Nov 25 13:24 UTC │                     │
	│ mount   │ -p functional-471703 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2246723281/001:/mount1 --alsologtostderr -v=1                │ functional-471703 │ jenkins │ v1.37.0 │ 24 Nov 25 13:24 UTC │                     │
	│ mount   │ -p functional-471703 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2246723281/001:/mount3 --alsologtostderr -v=1                │ functional-471703 │ jenkins │ v1.37.0 │ 24 Nov 25 13:24 UTC │                     │
	│ ssh     │ functional-471703 ssh findmnt -T /mount1                                                                                          │ functional-471703 │ jenkins │ v1.37.0 │ 24 Nov 25 13:24 UTC │ 24 Nov 25 13:24 UTC │
	│ ssh     │ functional-471703 ssh findmnt -T /mount2                                                                                          │ functional-471703 │ jenkins │ v1.37.0 │ 24 Nov 25 13:24 UTC │ 24 Nov 25 13:24 UTC │
	│ ssh     │ functional-471703 ssh findmnt -T /mount3                                                                                          │ functional-471703 │ jenkins │ v1.37.0 │ 24 Nov 25 13:24 UTC │ 24 Nov 25 13:24 UTC │
	│ mount   │ -p functional-471703 --kill=true                                                                                                  │ functional-471703 │ jenkins │ v1.37.0 │ 24 Nov 25 13:24 UTC │                     │
	│ addons  │ functional-471703 addons list                                                                                                     │ functional-471703 │ jenkins │ v1.37.0 │ 24 Nov 25 13:24 UTC │ 24 Nov 25 13:24 UTC │
	│ addons  │ functional-471703 addons list -o json                                                                                             │ functional-471703 │ jenkins │ v1.37.0 │ 24 Nov 25 13:24 UTC │ 24 Nov 25 13:24 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 13:23:13
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 13:23:13.686114   24360 out.go:360] Setting OutFile to fd 1 ...
	I1124 13:23:13.686226   24360 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:23:13.686230   24360 out.go:374] Setting ErrFile to fd 2...
	I1124 13:23:13.686234   24360 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:23:13.686565   24360 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-2805/.minikube/bin
	I1124 13:23:13.686964   24360 out.go:368] Setting JSON to false
	I1124 13:23:13.688022   24360 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":3945,"bootTime":1763986649,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1124 13:23:13.688079   24360 start.go:143] virtualization:  
	I1124 13:23:13.691671   24360 out.go:179] * [functional-471703] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1124 13:23:13.695558   24360 out.go:179]   - MINIKUBE_LOCATION=21932
	I1124 13:23:13.695630   24360 notify.go:221] Checking for updates...
	I1124 13:23:13.702154   24360 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 13:23:13.705045   24360 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21932-2805/kubeconfig
	I1124 13:23:13.707898   24360 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21932-2805/.minikube
	I1124 13:23:13.710888   24360 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1124 13:23:13.713707   24360 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 13:23:13.716896   24360 config.go:182] Loaded profile config "functional-471703": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 13:23:13.716984   24360 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 13:23:13.750149   24360 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1124 13:23:13.750280   24360 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 13:23:13.809338   24360 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:65 SystemTime:2025-11-24 13:23:13.800202289 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 13:23:13.809431   24360 docker.go:319] overlay module found
	I1124 13:23:13.812593   24360 out.go:179] * Using the docker driver based on existing profile
	I1124 13:23:13.815425   24360 start.go:309] selected driver: docker
	I1124 13:23:13.815435   24360 start.go:927] validating driver "docker" against &{Name:functional-471703 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-471703 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 13:23:13.815529   24360 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 13:23:13.815630   24360 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 13:23:13.880304   24360 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:65 SystemTime:2025-11-24 13:23:13.870105332 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 13:23:13.880690   24360 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 13:23:13.880715   24360 cni.go:84] Creating CNI manager for ""
	I1124 13:23:13.880770   24360 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 13:23:13.880815   24360 start.go:353] cluster config:
	{Name:functional-471703 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-471703 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 13:23:13.885968   24360 out.go:179] * Starting "functional-471703" primary control-plane node in "functional-471703" cluster
	I1124 13:23:13.888766   24360 cache.go:134] Beginning downloading kic base image for docker with crio
	I1124 13:23:13.891765   24360 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1124 13:23:13.894695   24360 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 13:23:13.894734   24360 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21932-2805/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1124 13:23:13.894743   24360 cache.go:65] Caching tarball of preloaded images
	I1124 13:23:13.894770   24360 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1124 13:23:13.894842   24360 preload.go:238] Found /home/jenkins/minikube-integration/21932-2805/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1124 13:23:13.894851   24360 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1124 13:23:13.894974   24360 profile.go:143] Saving config to /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/functional-471703/config.json ...
	I1124 13:23:13.915093   24360 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1124 13:23:13.915104   24360 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1124 13:23:13.915126   24360 cache.go:240] Successfully downloaded all kic artifacts
	I1124 13:23:13.915160   24360 start.go:360] acquireMachinesLock for functional-471703: {Name:mk7bd1b79981a5879c2a9932540643455e81bc75 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 13:23:13.915251   24360 start.go:364] duration metric: took 72.518µs to acquireMachinesLock for "functional-471703"
	I1124 13:23:13.915270   24360 start.go:96] Skipping create...Using existing machine configuration
	I1124 13:23:13.915274   24360 fix.go:54] fixHost starting: 
	I1124 13:23:13.915560   24360 cli_runner.go:164] Run: docker container inspect functional-471703 --format={{.State.Status}}
	I1124 13:23:13.932583   24360 fix.go:112] recreateIfNeeded on functional-471703: state=Running err=<nil>
	W1124 13:23:13.932610   24360 fix.go:138] unexpected machine state, will restart: <nil>
	I1124 13:23:13.935702   24360 out.go:252] * Updating the running docker "functional-471703" container ...
	I1124 13:23:13.935722   24360 machine.go:94] provisionDockerMachine start ...
	I1124 13:23:13.935812   24360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-471703
	I1124 13:23:13.952921   24360 main.go:143] libmachine: Using SSH client type: native
	I1124 13:23:13.953240   24360 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1124 13:23:13.953247   24360 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 13:23:14.103070   24360 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-471703
	
	I1124 13:23:14.103084   24360 ubuntu.go:182] provisioning hostname "functional-471703"
	I1124 13:23:14.103144   24360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-471703
	I1124 13:23:14.121137   24360 main.go:143] libmachine: Using SSH client type: native
	I1124 13:23:14.121443   24360 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1124 13:23:14.121451   24360 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-471703 && echo "functional-471703" | sudo tee /etc/hostname
	I1124 13:23:14.285717   24360 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-471703
	
	I1124 13:23:14.285782   24360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-471703
	I1124 13:23:14.306289   24360 main.go:143] libmachine: Using SSH client type: native
	I1124 13:23:14.306594   24360 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1124 13:23:14.306607   24360 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-471703' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-471703/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-471703' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 13:23:14.460203   24360 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 13:23:14.460219   24360 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21932-2805/.minikube CaCertPath:/home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21932-2805/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21932-2805/.minikube}
	I1124 13:23:14.460238   24360 ubuntu.go:190] setting up certificates
	I1124 13:23:14.460245   24360 provision.go:84] configureAuth start
	I1124 13:23:14.460309   24360 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-471703
	I1124 13:23:14.489884   24360 provision.go:143] copyHostCerts
	I1124 13:23:14.489948   24360 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-2805/.minikube/ca.pem, removing ...
	I1124 13:23:14.489959   24360 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-2805/.minikube/ca.pem
	I1124 13:23:14.490043   24360 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21932-2805/.minikube/ca.pem (1078 bytes)
	I1124 13:23:14.490143   24360 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-2805/.minikube/cert.pem, removing ...
	I1124 13:23:14.490147   24360 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-2805/.minikube/cert.pem
	I1124 13:23:14.490174   24360 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-2805/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21932-2805/.minikube/cert.pem (1123 bytes)
	I1124 13:23:14.490235   24360 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-2805/.minikube/key.pem, removing ...
	I1124 13:23:14.490238   24360 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-2805/.minikube/key.pem
	I1124 13:23:14.490262   24360 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-2805/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21932-2805/.minikube/key.pem (1675 bytes)
	I1124 13:23:14.490317   24360 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21932-2805/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca-key.pem org=jenkins.functional-471703 san=[127.0.0.1 192.168.49.2 functional-471703 localhost minikube]
	I1124 13:23:14.745304   24360 provision.go:177] copyRemoteCerts
	I1124 13:23:14.745354   24360 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 13:23:14.745390   24360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-471703
	I1124 13:23:14.763200   24360 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/functional-471703/id_rsa Username:docker}
	I1124 13:23:14.872847   24360 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1124 13:23:14.890644   24360 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1124 13:23:14.908914   24360 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1124 13:23:14.926429   24360 provision.go:87] duration metric: took 466.162327ms to configureAuth
	I1124 13:23:14.926445   24360 ubuntu.go:206] setting minikube options for container-runtime
	I1124 13:23:14.926659   24360 config.go:182] Loaded profile config "functional-471703": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 13:23:14.926758   24360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-471703
	I1124 13:23:14.944497   24360 main.go:143] libmachine: Using SSH client type: native
	I1124 13:23:14.944800   24360 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1124 13:23:14.944812   24360 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1124 13:23:20.379514   24360 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1124 13:23:20.379528   24360 machine.go:97] duration metric: took 6.443800003s to provisionDockerMachine
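The provisioning step above drops a CRIO_MINIKUBE_OPTIONS file at /etc/sysconfig/crio.minikube (marking the service CIDR 10.96.0.0/12 as an insecure registry) and restarts cri-o; per the timestamps, that restart accounts for most of the ~6.4s step duration. If needed, the written file can be checked over the same channel (a sketch; the path is the one shown in the SSH command above):

	minikube -p functional-471703 ssh -- cat /etc/sysconfig/crio.minikube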
	I1124 13:23:20.379538   24360 start.go:293] postStartSetup for "functional-471703" (driver="docker")
	I1124 13:23:20.379548   24360 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 13:23:20.379624   24360 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 13:23:20.379660   24360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-471703
	I1124 13:23:20.398128   24360 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/functional-471703/id_rsa Username:docker}
	I1124 13:23:20.503623   24360 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 13:23:20.507005   24360 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 13:23:20.507022   24360 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 13:23:20.507032   24360 filesync.go:126] Scanning /home/jenkins/minikube-integration/21932-2805/.minikube/addons for local assets ...
	I1124 13:23:20.507084   24360 filesync.go:126] Scanning /home/jenkins/minikube-integration/21932-2805/.minikube/files for local assets ...
	I1124 13:23:20.507157   24360 filesync.go:149] local asset: /home/jenkins/minikube-integration/21932-2805/.minikube/files/etc/ssl/certs/46112.pem -> 46112.pem in /etc/ssl/certs
	I1124 13:23:20.507229   24360 filesync.go:149] local asset: /home/jenkins/minikube-integration/21932-2805/.minikube/files/etc/test/nested/copy/4611/hosts -> hosts in /etc/test/nested/copy/4611
	I1124 13:23:20.507277   24360 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/4611
	I1124 13:23:20.514933   24360 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/files/etc/ssl/certs/46112.pem --> /etc/ssl/certs/46112.pem (1708 bytes)
	I1124 13:23:20.532745   24360 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/files/etc/test/nested/copy/4611/hosts --> /etc/test/nested/copy/4611/hosts (40 bytes)
	I1124 13:23:20.551220   24360 start.go:296] duration metric: took 171.668595ms for postStartSetup
	I1124 13:23:20.551291   24360 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 13:23:20.551329   24360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-471703
	I1124 13:23:20.568997   24360 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/functional-471703/id_rsa Username:docker}
	I1124 13:23:20.672464   24360 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 13:23:20.677590   24360 fix.go:56] duration metric: took 6.76230858s for fixHost
	I1124 13:23:20.677606   24360 start.go:83] releasing machines lock for "functional-471703", held for 6.762348006s
	I1124 13:23:20.677695   24360 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-471703
	I1124 13:23:20.694582   24360 ssh_runner.go:195] Run: cat /version.json
	I1124 13:23:20.694632   24360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-471703
	I1124 13:23:20.694652   24360 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 13:23:20.694700   24360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-471703
	I1124 13:23:20.714330   24360 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/functional-471703/id_rsa Username:docker}
	I1124 13:23:20.714469   24360 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/functional-471703/id_rsa Username:docker}
	I1124 13:23:20.819148   24360 ssh_runner.go:195] Run: systemctl --version
	I1124 13:23:20.928107   24360 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1124 13:23:20.982063   24360 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 13:23:20.986947   24360 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 13:23:20.987006   24360 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 13:23:20.994810   24360 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1124 13:23:20.994824   24360 start.go:496] detecting cgroup driver to use...
	I1124 13:23:20.994854   24360 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1124 13:23:20.994900   24360 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1124 13:23:21.012351   24360 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1124 13:23:21.026586   24360 docker.go:218] disabling cri-docker service (if available) ...
	I1124 13:23:21.026643   24360 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 13:23:21.043820   24360 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 13:23:21.058278   24360 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 13:23:21.214846   24360 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 13:23:21.354761   24360 docker.go:234] disabling docker service ...
	I1124 13:23:21.354832   24360 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 13:23:21.372145   24360 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 13:23:21.385471   24360 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 13:23:21.532115   24360 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 13:23:21.666587   24360 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 13:23:21.680346   24360 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 13:23:21.695592   24360 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1124 13:23:21.695649   24360 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 13:23:21.705060   24360 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1124 13:23:21.705120   24360 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 13:23:21.714362   24360 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 13:23:21.724325   24360 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 13:23:21.733189   24360 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 13:23:21.741427   24360 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 13:23:21.750689   24360 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 13:23:21.759052   24360 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 13:23:21.768388   24360 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 13:23:21.776016   24360 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 13:23:21.783645   24360 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 13:23:21.923951   24360 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1124 13:23:26.793391   24360 ssh_runner.go:235] Completed: sudo systemctl restart crio: (4.86941693s)
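The cri-o reconfiguration above is a series of in-place sed edits to /etc/crio/crio.conf.d/02-crio.conf followed by a daemon restart. The first edit (swapping the pause image) expressed as a Go sketch; the path and values are taken from the log:

package main

import (
	"log"
	"os"
	"regexp"
)

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(conf)
	if err != nil {
		log.Fatal(err)
	}
	// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|'
	re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	data = re.ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
	if err := os.WriteFile(conf, data, 0o644); err != nil {
		log.Fatal(err)
	}
}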
	I1124 13:23:26.793407   24360 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1124 13:23:26.793461   24360 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1124 13:23:26.797156   24360 start.go:564] Will wait 60s for crictl version
	I1124 13:23:26.797218   24360 ssh_runner.go:195] Run: which crictl
	I1124 13:23:26.800657   24360 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 13:23:26.829953   24360 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
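After the restart, minikube waits up to 60s for the CRI socket (via repeated stat) before asking crictl for the runtime version. A stat-based polling sketch of that wait:

package main

import (
	"log"
	"os"
	"time"
)

// waitForPath polls with os.Stat, mirroring the repeated
// `stat /var/run/crio/crio.sock` in the log above.
func waitForPath(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		} else if time.Now().After(deadline) {
			return err
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	if err := waitForPath("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		log.Fatal(err)
	}
	log.Println("crio.sock is up")
}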
	I1124 13:23:26.830049   24360 ssh_runner.go:195] Run: crio --version
	I1124 13:23:26.860079   24360 ssh_runner.go:195] Run: crio --version
	I1124 13:23:26.893738   24360 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1124 13:23:26.896764   24360 cli_runner.go:164] Run: docker network inspect functional-471703 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 13:23:26.912868   24360 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1124 13:23:26.920145   24360 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1124 13:23:26.923107   24360 kubeadm.go:884] updating cluster {Name:functional-471703 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-471703 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 13:23:26.923243   24360 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 13:23:26.923317   24360 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 13:23:26.959157   24360 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 13:23:26.959169   24360 crio.go:433] Images already preloaded, skipping extraction
	I1124 13:23:26.959221   24360 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 13:23:26.986710   24360 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 13:23:26.986721   24360 cache_images.go:86] Images are preloaded, skipping loading
	I1124 13:23:26.986727   24360 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.34.1 crio true true} ...
	I1124 13:23:26.986844   24360 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-471703 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-471703 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1124 13:23:26.986930   24360 ssh_runner.go:195] Run: crio config
	I1124 13:23:27.059135   24360 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1124 13:23:27.059178   24360 cni.go:84] Creating CNI manager for ""
	I1124 13:23:27.059187   24360 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 13:23:27.059201   24360 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 13:23:27.059229   24360 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-471703 NodeName:functional-471703 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 13:23:27.059431   24360 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-471703"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
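minikube renders this kubeadm config from the cluster settings (node name, IPs, extra args) rather than shipping it verbatim. A reduced text/template sketch producing just the InitConfiguration stanza; the struct and field names are illustrative, not minikube's:

package main

import (
	"log"
	"os"
	"text/template"
)

// initCfg holds only the fields this sketch needs; the real config is far larger.
type initCfg struct {
	AdvertiseAddress string
	BindPort         int
	NodeName         string
	CRISocket        string
}

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: unix://{{.CRISocket}}
  name: "{{.NodeName}}"
`

func main() {
	c := initCfg{
		AdvertiseAddress: "192.168.49.2",
		BindPort:         8441,
		NodeName:         "functional-471703",
		CRISocket:        "/var/run/crio/crio.sock",
	}
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	if err := t.Execute(os.Stdout, c); err != nil {
		log.Fatal(err)
	}
}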
	I1124 13:23:27.059504   24360 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1124 13:23:27.067085   24360 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 13:23:27.067141   24360 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 13:23:27.075332   24360 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1124 13:23:27.088337   24360 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 13:23:27.101232   24360 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2064 bytes)
	I1124 13:23:27.114502   24360 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1124 13:23:27.118119   24360 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 13:23:27.262360   24360 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 13:23:27.276194   24360 certs.go:69] Setting up /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/functional-471703 for IP: 192.168.49.2
	I1124 13:23:27.276204   24360 certs.go:195] generating shared ca certs ...
	I1124 13:23:27.276218   24360 certs.go:227] acquiring lock for ca certs: {Name:mk5b88bcf3bee8e73291a2c9c79f99bafa2afa7b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:23:27.276351   24360 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21932-2805/.minikube/ca.key
	I1124 13:23:27.276389   24360 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21932-2805/.minikube/proxy-client-ca.key
	I1124 13:23:27.276395   24360 certs.go:257] generating profile certs ...
	I1124 13:23:27.276486   24360 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/functional-471703/client.key
	I1124 13:23:27.276537   24360 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/functional-471703/apiserver.key.3ceaa230
	I1124 13:23:27.276580   24360 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/functional-471703/proxy-client.key
	I1124 13:23:27.276688   24360 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2805/.minikube/certs/4611.pem (1338 bytes)
	W1124 13:23:27.276756   24360 certs.go:480] ignoring /home/jenkins/minikube-integration/21932-2805/.minikube/certs/4611_empty.pem, impossibly tiny 0 bytes
	I1124 13:23:27.276763   24360 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca-key.pem (1679 bytes)
	I1124 13:23:27.276790   24360 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca.pem (1078 bytes)
	I1124 13:23:27.276818   24360 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2805/.minikube/certs/cert.pem (1123 bytes)
	I1124 13:23:27.276842   24360 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2805/.minikube/certs/key.pem (1675 bytes)
	I1124 13:23:27.276888   24360 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2805/.minikube/files/etc/ssl/certs/46112.pem (1708 bytes)
	I1124 13:23:27.277479   24360 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 13:23:27.297729   24360 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1124 13:23:27.316344   24360 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 13:23:27.334607   24360 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1124 13:23:27.352571   24360 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/functional-471703/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1124 13:23:27.371004   24360 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/functional-471703/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1124 13:23:27.388651   24360 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/functional-471703/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 13:23:27.405925   24360 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/functional-471703/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1124 13:23:27.423963   24360 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 13:23:27.441719   24360 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/certs/4611.pem --> /usr/share/ca-certificates/4611.pem (1338 bytes)
	I1124 13:23:27.460998   24360 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/files/etc/ssl/certs/46112.pem --> /usr/share/ca-certificates/46112.pem (1708 bytes)
	I1124 13:23:27.478960   24360 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 13:23:27.491786   24360 ssh_runner.go:195] Run: openssl version
	I1124 13:23:27.497915   24360 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/46112.pem && ln -fs /usr/share/ca-certificates/46112.pem /etc/ssl/certs/46112.pem"
	I1124 13:23:27.506647   24360 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/46112.pem
	I1124 13:23:27.511255   24360 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 13:21 /usr/share/ca-certificates/46112.pem
	I1124 13:23:27.511316   24360 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/46112.pem
	I1124 13:23:27.559322   24360 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/46112.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 13:23:27.567630   24360 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 13:23:27.576725   24360 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 13:23:27.580387   24360 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 13:14 /usr/share/ca-certificates/minikubeCA.pem
	I1124 13:23:27.580447   24360 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 13:23:27.621068   24360 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 13:23:27.629011   24360 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4611.pem && ln -fs /usr/share/ca-certificates/4611.pem /etc/ssl/certs/4611.pem"
	I1124 13:23:27.637194   24360 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4611.pem
	I1124 13:23:27.640939   24360 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 13:21 /usr/share/ca-certificates/4611.pem
	I1124 13:23:27.641002   24360 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4611.pem
	I1124 13:23:27.681644   24360 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4611.pem /etc/ssl/certs/51391683.0"
	I1124 13:23:27.689178   24360 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 13:23:27.692737   24360 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1124 13:23:27.733171   24360 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1124 13:23:27.773831   24360 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1124 13:23:27.814284   24360 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1124 13:23:27.855155   24360 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1124 13:23:27.896476   24360 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
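Each `openssl x509 -checkend 86400` above exits non-zero when the certificate expires within the next 24 hours, which is what triggers regeneration. The same check with Go's crypto/x509 (certificate path copied from the log):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// Mirrors `openssl x509 -checkend 86400`: is the cert still valid 24h from now?
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate will expire within 24h")
		os.Exit(1)
	}
	fmt.Println("certificate is valid for at least another 24h")
}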
	I1124 13:23:27.937399   24360 kubeadm.go:401] StartCluster: {Name:functional-471703 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-471703 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 13:23:27.937477   24360 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 13:23:27.937547   24360 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 13:23:27.968996   24360 cri.go:89] found id: "a69058794a0e4ff800e63f86ff8d9884c8f482f5ed68f560c71d0670674ed3e3"
	I1124 13:23:27.969010   24360 cri.go:89] found id: "68a3a2d8d777cf152b00ac6aa8b413eb8c1f7f11e28bb5c2304dcacb254eae67"
	I1124 13:23:27.969013   24360 cri.go:89] found id: "444d1ac290bc5d76ddf4282411f1c346b7d64bbc9da03b44be199f2860bf84d2"
	I1124 13:23:27.969015   24360 cri.go:89] found id: "067193011f9886470b6c881f3c0da5e260cffbc424b84f2e82c2b9ee677878de"
	I1124 13:23:27.969017   24360 cri.go:89] found id: "98cf5592f21207f801aac9aab46072377e399092db918d85a163bcc8f6495887"
	I1124 13:23:27.969035   24360 cri.go:89] found id: "52cb28626ef4c494529889ecdd128a18b2efe73d12cb776090a784b9bfa9c784"
	I1124 13:23:27.969037   24360 cri.go:89] found id: "31bd0f2e8fa40bab7da7de6709c33e34794bb10006d371481970728fafe50345"
	I1124 13:23:27.969039   24360 cri.go:89] found id: "c7d48b502ee2949783d0790eae05582b7e705ca4b38e3d9d697c46f7934e764a"
	I1124 13:23:27.969041   24360 cri.go:89] found id: ""
	I1124 13:23:27.969094   24360 ssh_runner.go:195] Run: sudo runc list -f json
	W1124 13:23:27.980287   24360 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T13:23:27Z" level=error msg="open /run/runc: no such file or directory"
	I1124 13:23:27.980366   24360 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 13:23:27.988185   24360 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1124 13:23:27.988194   24360 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1124 13:23:27.988246   24360 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1124 13:23:27.995607   24360 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1124 13:23:27.996188   24360 kubeconfig.go:125] found "functional-471703" server: "https://192.168.49.2:8441"
	I1124 13:23:27.997572   24360 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1124 13:23:28.012014   24360 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-11-24 13:21:30.528576473 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-11-24 13:23:27.109516664 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
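Drift detection is literally `diff -u old new`: exit status 0 means identical, 1 means the rendered config changed and the control plane gets reconfigured, 2 means diff itself failed. A sketch of reading that three-way exit status from Go:

package main

import (
	"errors"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	out, err := exec.Command("diff", "-u",
		"/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new").CombinedOutput()
	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("no drift: configs are identical")
	case errors.As(err, &exitErr) && exitErr.ExitCode() == 1:
		fmt.Printf("drift detected, will reconfigure:\n%s", out)
	default:
		log.Fatal(err) // exit code 2: diff itself failed
	}
}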
	I1124 13:23:28.012025   24360 kubeadm.go:1161] stopping kube-system containers ...
	I1124 13:23:28.012037   24360 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1124 13:23:28.012108   24360 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 13:23:28.041431   24360 cri.go:89] found id: "a69058794a0e4ff800e63f86ff8d9884c8f482f5ed68f560c71d0670674ed3e3"
	I1124 13:23:28.041444   24360 cri.go:89] found id: "68a3a2d8d777cf152b00ac6aa8b413eb8c1f7f11e28bb5c2304dcacb254eae67"
	I1124 13:23:28.041448   24360 cri.go:89] found id: "444d1ac290bc5d76ddf4282411f1c346b7d64bbc9da03b44be199f2860bf84d2"
	I1124 13:23:28.041451   24360 cri.go:89] found id: "067193011f9886470b6c881f3c0da5e260cffbc424b84f2e82c2b9ee677878de"
	I1124 13:23:28.041453   24360 cri.go:89] found id: "98cf5592f21207f801aac9aab46072377e399092db918d85a163bcc8f6495887"
	I1124 13:23:28.041456   24360 cri.go:89] found id: "52cb28626ef4c494529889ecdd128a18b2efe73d12cb776090a784b9bfa9c784"
	I1124 13:23:28.041458   24360 cri.go:89] found id: "31bd0f2e8fa40bab7da7de6709c33e34794bb10006d371481970728fafe50345"
	I1124 13:23:28.041460   24360 cri.go:89] found id: "c7d48b502ee2949783d0790eae05582b7e705ca4b38e3d9d697c46f7934e764a"
	I1124 13:23:28.041463   24360 cri.go:89] found id: ""
	I1124 13:23:28.041467   24360 cri.go:252] Stopping containers: [a69058794a0e4ff800e63f86ff8d9884c8f482f5ed68f560c71d0670674ed3e3 68a3a2d8d777cf152b00ac6aa8b413eb8c1f7f11e28bb5c2304dcacb254eae67 444d1ac290bc5d76ddf4282411f1c346b7d64bbc9da03b44be199f2860bf84d2 067193011f9886470b6c881f3c0da5e260cffbc424b84f2e82c2b9ee677878de 98cf5592f21207f801aac9aab46072377e399092db918d85a163bcc8f6495887 52cb28626ef4c494529889ecdd128a18b2efe73d12cb776090a784b9bfa9c784 31bd0f2e8fa40bab7da7de6709c33e34794bb10006d371481970728fafe50345 c7d48b502ee2949783d0790eae05582b7e705ca4b38e3d9d697c46f7934e764a]
	I1124 13:23:28.041529   24360 ssh_runner.go:195] Run: which crictl
	I1124 13:23:28.045370   24360 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl stop --timeout=10 a69058794a0e4ff800e63f86ff8d9884c8f482f5ed68f560c71d0670674ed3e3 68a3a2d8d777cf152b00ac6aa8b413eb8c1f7f11e28bb5c2304dcacb254eae67 444d1ac290bc5d76ddf4282411f1c346b7d64bbc9da03b44be199f2860bf84d2 067193011f9886470b6c881f3c0da5e260cffbc424b84f2e82c2b9ee677878de 98cf5592f21207f801aac9aab46072377e399092db918d85a163bcc8f6495887 52cb28626ef4c494529889ecdd128a18b2efe73d12cb776090a784b9bfa9c784 31bd0f2e8fa40bab7da7de6709c33e34794bb10006d371481970728fafe50345 c7d48b502ee2949783d0790eae05582b7e705ca4b38e3d9d697c46f7934e764a
	I1124 13:23:28.108714   24360 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1124 13:23:28.242457   24360 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1124 13:23:28.250325   24360 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5631 Nov 24 13:21 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Nov 24 13:21 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1972 Nov 24 13:21 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Nov 24 13:21 /etc/kubernetes/scheduler.conf
	
	I1124 13:23:28.250385   24360 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1124 13:23:28.258251   24360 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1124 13:23:28.265978   24360 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1124 13:23:28.266033   24360 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1124 13:23:28.273497   24360 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1124 13:23:28.281238   24360 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1124 13:23:28.281291   24360 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1124 13:23:28.288862   24360 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1124 13:23:28.296271   24360 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1124 13:23:28.296325   24360 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
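Each existing kubeconfig is kept only if it already references https://control-plane.minikube.internal:8441; a grep exit status of 1 means the file points elsewhere, so it is removed for kubeadm to regenerate. A Go sketch of the equivalent check and cleanup:

package main

import (
	"fmt"
	"log"
	"os"
	"strings"
)

const endpoint = "https://control-plane.minikube.internal:8441"

func main() {
	for _, f := range []string{
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		data, err := os.ReadFile(f)
		if err != nil {
			log.Fatal(err)
		}
		if !strings.Contains(string(data), endpoint) {
			fmt.Printf("%s does not reference %s, removing\n", f, endpoint)
			if err := os.Remove(f); err != nil {
				log.Fatal(err)
			}
		}
	}
}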
	I1124 13:23:28.303409   24360 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1124 13:23:28.311078   24360 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1124 13:23:28.359007   24360 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1124 13:23:32.108405   24360 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (3.749373856s)
	I1124 13:23:32.108463   24360 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1124 13:23:32.320709   24360 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1124 13:23:32.397875   24360 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1124 13:23:32.464570   24360 api_server.go:52] waiting for apiserver process to appear ...
	I1124 13:23:32.464638   24360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 13:23:32.965340   24360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 13:23:32.990008   24360 api_server.go:72] duration metric: took 525.437267ms to wait for apiserver process to appear ...
	I1124 13:23:32.990022   24360 api_server.go:88] waiting for apiserver healthz status ...
	I1124 13:23:32.990039   24360 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1124 13:23:32.990330   24360 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I1124 13:23:33.490524   24360 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1124 13:23:37.757160   24360 api_server.go:279] https://192.168.49.2:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1124 13:23:37.757181   24360 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1124 13:23:37.757193   24360 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1124 13:23:37.817820   24360 api_server.go:279] https://192.168.49.2:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1124 13:23:37.817837   24360 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1124 13:23:37.990077   24360 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1124 13:23:38.001523   24360 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1124 13:23:38.001546   24360 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1124 13:23:38.490105   24360 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1124 13:23:38.504500   24360 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1124 13:23:38.504515   24360 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1124 13:23:38.990148   24360 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1124 13:23:38.999981   24360 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1124 13:23:38.999998   24360 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1124 13:23:39.490656   24360 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1124 13:23:39.498920   24360 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I1124 13:23:39.513366   24360 api_server.go:141] control plane version: v1.34.1
	I1124 13:23:39.513389   24360 api_server.go:131] duration metric: took 6.523360611s to wait for apiserver health ...
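The retry loop above treats 403 (anonymous access denied until the RBAC bootstrap roles exist) and 500 (poststarthooks such as rbac/bootstrap-roles still pending) as transient, and keeps polling until /healthz returns 200 "ok". A polling sketch; TLS verification is skipped here purely for illustration:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"log"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only: skip apiserver cert checks
		},
	}
	deadline := time.Now().Add(60 * time.Second)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.49.2:8441/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			// 403 and 500 are expected while RBAC bootstrap and poststarthooks finish.
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz: %s\n", body)
				return
			}
			log.Printf("healthz returned %d, retrying", resp.StatusCode)
		} else {
			log.Printf("healthz: %v, retrying", err)
		}
		time.Sleep(500 * time.Millisecond)
	}
	log.Fatal("apiserver never became healthy")
}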
	I1124 13:23:39.513396   24360 cni.go:84] Creating CNI manager for ""
	I1124 13:23:39.513401   24360 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 13:23:39.517342   24360 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1124 13:23:39.520506   24360 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1124 13:23:39.524998   24360 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1124 13:23:39.525009   24360 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1124 13:23:39.540507   24360 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1124 13:23:40.079465   24360 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 13:23:40.082710   24360 system_pods.go:59] 8 kube-system pods found
	I1124 13:23:40.082739   24360 system_pods.go:61] "coredns-66bc5c9577-thd8f" [1a580686-4341-49c2-985f-0d9298d0ded8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 13:23:40.082746   24360 system_pods.go:61] "etcd-functional-471703" [468a7505-4355-4a8e-8f9a-bbf58f9dde74] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 13:23:40.082750   24360 system_pods.go:61] "kindnet-jbj2n" [66fa92df-228d-4f04-8f01-9672bd9aa0ee] Running
	I1124 13:23:40.082756   24360 system_pods.go:61] "kube-apiserver-functional-471703" [71850dd1-f920-46cb-931a-670dc910a076] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1124 13:23:40.082762   24360 system_pods.go:61] "kube-controller-manager-functional-471703" [0b104ac5-df64-451b-8665-aeba69e5fc02] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 13:23:40.082766   24360 system_pods.go:61] "kube-proxy-2klr6" [fe33647a-0aab-49e3-80ad-3c37a40ac4ed] Running
	I1124 13:23:40.082771   24360 system_pods.go:61] "kube-scheduler-functional-471703" [57e4fc35-41ae-4dcc-b92a-cfbf94af2842] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 13:23:40.082773   24360 system_pods.go:61] "storage-provisioner" [62fddaeb-cb2e-4b3a-b5a4-233e0d1e9227] Running
	I1124 13:23:40.082778   24360 system_pods.go:74] duration metric: took 3.302262ms to wait for pod list to return data ...
	I1124 13:23:40.082784   24360 node_conditions.go:102] verifying NodePressure condition ...
	I1124 13:23:40.085415   24360 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1124 13:23:40.085435   24360 node_conditions.go:123] node cpu capacity is 2
	I1124 13:23:40.085445   24360 node_conditions.go:105] duration metric: took 2.658179ms to run NodePressure ...
	I1124 13:23:40.085504   24360 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1124 13:23:40.335181   24360 kubeadm.go:729] waiting for restarted kubelet to initialise ...
	I1124 13:23:40.338616   24360 kubeadm.go:744] kubelet initialised
	I1124 13:23:40.338627   24360 kubeadm.go:745] duration metric: took 3.434473ms waiting for restarted kubelet to initialise ...
	I1124 13:23:40.338640   24360 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1124 13:23:40.348223   24360 ops.go:34] apiserver oom_adj: -16
	I1124 13:23:40.348234   24360 kubeadm.go:602] duration metric: took 12.360034907s to restartPrimaryControlPlane
	I1124 13:23:40.348242   24360 kubeadm.go:403] duration metric: took 12.410854122s to StartCluster
	I1124 13:23:40.348257   24360 settings.go:142] acquiring lock: {Name:mk89c1ba43c874315f683e1eb3a8f5ff3817a931 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:23:40.348322   24360 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21932-2805/kubeconfig
	I1124 13:23:40.348890   24360 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2805/kubeconfig: {Name:mk95d10d27091d631e85a5a3c35d5e4e38630871 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:23:40.349095   24360 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 13:23:40.349345   24360 config.go:182] Loaded profile config "functional-471703": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 13:23:40.349382   24360 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 13:23:40.349480   24360 addons.go:70] Setting storage-provisioner=true in profile "functional-471703"
	I1124 13:23:40.349495   24360 addons.go:239] Setting addon storage-provisioner=true in "functional-471703"
	I1124 13:23:40.349497   24360 addons.go:70] Setting default-storageclass=true in profile "functional-471703"
	W1124 13:23:40.349500   24360 addons.go:248] addon storage-provisioner should already be in state true
	I1124 13:23:40.349513   24360 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-471703"
	I1124 13:23:40.349520   24360 host.go:66] Checking if "functional-471703" exists ...
	I1124 13:23:40.349844   24360 cli_runner.go:164] Run: docker container inspect functional-471703 --format={{.State.Status}}
	I1124 13:23:40.349927   24360 cli_runner.go:164] Run: docker container inspect functional-471703 --format={{.State.Status}}
	I1124 13:23:40.353224   24360 out.go:179] * Verifying Kubernetes components...
	I1124 13:23:40.356163   24360 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 13:23:40.389397   24360 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 13:23:40.395481   24360 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 13:23:40.395493   24360 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 13:23:40.395555   24360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-471703
	I1124 13:23:40.397620   24360 addons.go:239] Setting addon default-storageclass=true in "functional-471703"
	W1124 13:23:40.397630   24360 addons.go:248] addon default-storageclass should already be in state true
	I1124 13:23:40.397652   24360 host.go:66] Checking if "functional-471703" exists ...
	I1124 13:23:40.398058   24360 cli_runner.go:164] Run: docker container inspect functional-471703 --format={{.State.Status}}
	I1124 13:23:40.426720   24360 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 13:23:40.426736   24360 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 13:23:40.426796   24360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-471703
	I1124 13:23:40.444611   24360 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/functional-471703/id_rsa Username:docker}
	I1124 13:23:40.461061   24360 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/functional-471703/id_rsa Username:docker}
	I1124 13:23:40.573441   24360 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 13:23:40.587465   24360 node_ready.go:35] waiting up to 6m0s for node "functional-471703" to be "Ready" ...
	I1124 13:23:40.590610   24360 node_ready.go:49] node "functional-471703" is "Ready"
	I1124 13:23:40.590626   24360 node_ready.go:38] duration metric: took 3.131988ms for node "functional-471703" to be "Ready" ...
	I1124 13:23:40.590638   24360 api_server.go:52] waiting for apiserver process to appear ...
	I1124 13:23:40.590691   24360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 13:23:40.594523   24360 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 13:23:40.608560   24360 api_server.go:72] duration metric: took 259.440777ms to wait for apiserver process to appear ...
	I1124 13:23:40.608574   24360 api_server.go:88] waiting for apiserver healthz status ...
	I1124 13:23:40.608592   24360 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1124 13:23:40.610366   24360 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 13:23:40.621526   24360 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I1124 13:23:40.626132   24360 api_server.go:141] control plane version: v1.34.1
	I1124 13:23:40.626149   24360 api_server.go:131] duration metric: took 17.569068ms to wait for apiserver health ...
	I1124 13:23:40.626156   24360 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 13:23:40.630631   24360 system_pods.go:59] 8 kube-system pods found
	I1124 13:23:40.630650   24360 system_pods.go:61] "coredns-66bc5c9577-thd8f" [1a580686-4341-49c2-985f-0d9298d0ded8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 13:23:40.630659   24360 system_pods.go:61] "etcd-functional-471703" [468a7505-4355-4a8e-8f9a-bbf58f9dde74] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 13:23:40.630663   24360 system_pods.go:61] "kindnet-jbj2n" [66fa92df-228d-4f04-8f01-9672bd9aa0ee] Running
	I1124 13:23:40.630682   24360 system_pods.go:61] "kube-apiserver-functional-471703" [71850dd1-f920-46cb-931a-670dc910a076] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1124 13:23:40.630688   24360 system_pods.go:61] "kube-controller-manager-functional-471703" [0b104ac5-df64-451b-8665-aeba69e5fc02] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 13:23:40.630702   24360 system_pods.go:61] "kube-proxy-2klr6" [fe33647a-0aab-49e3-80ad-3c37a40ac4ed] Running
	I1124 13:23:40.630707   24360 system_pods.go:61] "kube-scheduler-functional-471703" [57e4fc35-41ae-4dcc-b92a-cfbf94af2842] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 13:23:40.630710   24360 system_pods.go:61] "storage-provisioner" [62fddaeb-cb2e-4b3a-b5a4-233e0d1e9227] Running
	I1124 13:23:40.630715   24360 system_pods.go:74] duration metric: took 4.5533ms to wait for pod list to return data ...
	I1124 13:23:40.630722   24360 default_sa.go:34] waiting for default service account to be created ...
	I1124 13:23:40.633715   24360 default_sa.go:45] found service account: "default"
	I1124 13:23:40.633729   24360 default_sa.go:55] duration metric: took 3.001903ms for default service account to be created ...
	I1124 13:23:40.633737   24360 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 13:23:40.640819   24360 system_pods.go:86] 8 kube-system pods found
	I1124 13:23:40.640838   24360 system_pods.go:89] "coredns-66bc5c9577-thd8f" [1a580686-4341-49c2-985f-0d9298d0ded8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 13:23:40.640846   24360 system_pods.go:89] "etcd-functional-471703" [468a7505-4355-4a8e-8f9a-bbf58f9dde74] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 13:23:40.640851   24360 system_pods.go:89] "kindnet-jbj2n" [66fa92df-228d-4f04-8f01-9672bd9aa0ee] Running
	I1124 13:23:40.640858   24360 system_pods.go:89] "kube-apiserver-functional-471703" [71850dd1-f920-46cb-931a-670dc910a076] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1124 13:23:40.640864   24360 system_pods.go:89] "kube-controller-manager-functional-471703" [0b104ac5-df64-451b-8665-aeba69e5fc02] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 13:23:40.640867   24360 system_pods.go:89] "kube-proxy-2klr6" [fe33647a-0aab-49e3-80ad-3c37a40ac4ed] Running
	I1124 13:23:40.640877   24360 system_pods.go:89] "kube-scheduler-functional-471703" [57e4fc35-41ae-4dcc-b92a-cfbf94af2842] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 13:23:40.640881   24360 system_pods.go:89] "storage-provisioner" [62fddaeb-cb2e-4b3a-b5a4-233e0d1e9227] Running
	I1124 13:23:40.640888   24360 system_pods.go:126] duration metric: took 7.146059ms to wait for k8s-apps to be running ...
	I1124 13:23:40.640894   24360 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 13:23:40.640953   24360 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 13:23:41.478353   24360 system_svc.go:56] duration metric: took 837.451492ms WaitForService to wait for kubelet
	I1124 13:23:41.478366   24360 kubeadm.go:587] duration metric: took 1.129252192s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 13:23:41.478382   24360 node_conditions.go:102] verifying NodePressure condition ...
	I1124 13:23:41.485715   24360 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1124 13:23:41.485729   24360 node_conditions.go:123] node cpu capacity is 2
	I1124 13:23:41.485738   24360 node_conditions.go:105] duration metric: took 7.352756ms to run NodePressure ...
	I1124 13:23:41.485749   24360 start.go:242] waiting for startup goroutines ...
	I1124 13:23:41.494727   24360 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1124 13:23:41.497680   24360 addons.go:530] duration metric: took 1.148282588s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1124 13:23:41.497720   24360 start.go:247] waiting for cluster config update ...
	I1124 13:23:41.497731   24360 start.go:256] writing updated cluster config ...
	I1124 13:23:41.498007   24360 ssh_runner.go:195] Run: rm -f paused
	I1124 13:23:41.501956   24360 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 13:23:41.505735   24360 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-thd8f" in "kube-system" namespace to be "Ready" or be gone ...
	W1124 13:23:43.511714   24360 pod_ready.go:104] pod "coredns-66bc5c9577-thd8f" is not "Ready", error: <nil>
	W1124 13:23:45.512004   24360 pod_ready.go:104] pod "coredns-66bc5c9577-thd8f" is not "Ready", error: <nil>
	I1124 13:23:46.022165   24360 pod_ready.go:94] pod "coredns-66bc5c9577-thd8f" is "Ready"
	I1124 13:23:46.022180   24360 pod_ready.go:86] duration metric: took 4.516430218s for pod "coredns-66bc5c9577-thd8f" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:23:46.024642   24360 pod_ready.go:83] waiting for pod "etcd-functional-471703" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:23:46.029750   24360 pod_ready.go:94] pod "etcd-functional-471703" is "Ready"
	I1124 13:23:46.029765   24360 pod_ready.go:86] duration metric: took 5.110473ms for pod "etcd-functional-471703" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:23:46.032500   24360 pod_ready.go:83] waiting for pod "kube-apiserver-functional-471703" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:23:46.037441   24360 pod_ready.go:94] pod "kube-apiserver-functional-471703" is "Ready"
	I1124 13:23:46.037456   24360 pod_ready.go:86] duration metric: took 4.941939ms for pod "kube-apiserver-functional-471703" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:23:46.040373   24360 pod_ready.go:83] waiting for pod "kube-controller-manager-functional-471703" in "kube-system" namespace to be "Ready" or be gone ...
	W1124 13:23:48.046072   24360 pod_ready.go:104] pod "kube-controller-manager-functional-471703" is not "Ready", error: <nil>
	W1124 13:23:50.046342   24360 pod_ready.go:104] pod "kube-controller-manager-functional-471703" is not "Ready", error: <nil>
	I1124 13:23:52.056157   24360 pod_ready.go:94] pod "kube-controller-manager-functional-471703" is "Ready"
	I1124 13:23:52.056172   24360 pod_ready.go:86] duration metric: took 6.015786173s for pod "kube-controller-manager-functional-471703" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:23:52.060830   24360 pod_ready.go:83] waiting for pod "kube-proxy-2klr6" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:23:52.069788   24360 pod_ready.go:94] pod "kube-proxy-2klr6" is "Ready"
	I1124 13:23:52.069806   24360 pod_ready.go:86] duration metric: took 8.959407ms for pod "kube-proxy-2klr6" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:23:52.076115   24360 pod_ready.go:83] waiting for pod "kube-scheduler-functional-471703" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:23:52.220536   24360 pod_ready.go:94] pod "kube-scheduler-functional-471703" is "Ready"
	I1124 13:23:52.220550   24360 pod_ready.go:86] duration metric: took 144.422015ms for pod "kube-scheduler-functional-471703" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:23:52.220561   24360 pod_ready.go:40] duration metric: took 10.718582742s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 13:23:52.270969   24360 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1124 13:23:52.276096   24360 out.go:179] * Done! kubectl is now configured to use "functional-471703" cluster and "default" namespace by default
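	
	Note: the restarted control plane is treated as healthy once /healthz on 192.168.49.2:8441 returns 200 and every kube-system pod reports "Ready", which is exactly the sequence logged above. A minimal manual re-run of the same probe (a sketch; /healthz is readable anonymously via the default system:public-info-viewer role, so no token is needed):
	
	  # re-run the probe minikube used, from inside the node
	  minikube -p functional-471703 ssh -- curl -sk https://192.168.49.2:8441/healthz
	  # expected output: ok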
	
	
	==> CRI-O <==
	Nov 24 13:24:32 functional-471703 crio[3550]: time="2025-11-24T13:24:32.430702706Z" level=info msg="Stopped pod sandbox (already stopped): a6c6abc73dbe01a9057b35abd6c25aa1210ccd0c21339a349f6340078d226c9b" id=4d7c1906-6fff-4241-b402-e5dad0f2cc49 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 24 13:24:32 functional-471703 crio[3550]: time="2025-11-24T13:24:32.431065507Z" level=info msg="Removing pod sandbox: a6c6abc73dbe01a9057b35abd6c25aa1210ccd0c21339a349f6340078d226c9b" id=2522d854-a23d-42b2-aa0d-789add6c2b78 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 24 13:24:32 functional-471703 crio[3550]: time="2025-11-24T13:24:32.434885377Z" level=info msg="Removed pod sandbox: a6c6abc73dbe01a9057b35abd6c25aa1210ccd0c21339a349f6340078d226c9b" id=2522d854-a23d-42b2-aa0d-789add6c2b78 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 24 13:24:32 functional-471703 crio[3550]: time="2025-11-24T13:24:32.435477308Z" level=info msg="Stopping pod sandbox: 1df06ae9cd25a6678e8cd4d2c00e2f6bbe06ae1232090f20e477db2c4564dc60" id=b476377d-b298-444b-bd68-0b668a052db4 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 24 13:24:32 functional-471703 crio[3550]: time="2025-11-24T13:24:32.435525719Z" level=info msg="Stopped pod sandbox (already stopped): 1df06ae9cd25a6678e8cd4d2c00e2f6bbe06ae1232090f20e477db2c4564dc60" id=b476377d-b298-444b-bd68-0b668a052db4 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 24 13:24:32 functional-471703 crio[3550]: time="2025-11-24T13:24:32.435888265Z" level=info msg="Removing pod sandbox: 1df06ae9cd25a6678e8cd4d2c00e2f6bbe06ae1232090f20e477db2c4564dc60" id=36cae95d-7f6c-4358-8bcb-a0fc371265fc name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 24 13:24:32 functional-471703 crio[3550]: time="2025-11-24T13:24:32.43951332Z" level=info msg="Removed pod sandbox: 1df06ae9cd25a6678e8cd4d2c00e2f6bbe06ae1232090f20e477db2c4564dc60" id=36cae95d-7f6c-4358-8bcb-a0fc371265fc name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 24 13:24:37 functional-471703 crio[3550]: time="2025-11-24T13:24:37.465129678Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=2f48f327-e69c-4607-b249-649d02f04868 name=/runtime.v1.ImageService/PullImage
	Nov 24 13:24:37 functional-471703 crio[3550]: time="2025-11-24T13:24:37.550882889Z" level=info msg="Running pod sandbox: default/hello-node-75c85bcc94-x49bt/POD" id=9e14dacf-d2cf-414d-8423-507e1577d53d name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 24 13:24:37 functional-471703 crio[3550]: time="2025-11-24T13:24:37.550944707Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 13:24:37 functional-471703 crio[3550]: time="2025-11-24T13:24:37.556273939Z" level=info msg="Got pod network &{Name:hello-node-75c85bcc94-x49bt Namespace:default ID:0b0d6929ad56395af153d1fcbfe2a643a2159110ee2471f9505f79d68b48804f UID:ddcb7a87-fe0b-4dd0-8df5-2400573ea1bd NetNS:/var/run/netns/8c51d14a-7ca6-4863-9c8d-57a85637dcc6 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012c960}] Aliases:map[]}"
	Nov 24 13:24:37 functional-471703 crio[3550]: time="2025-11-24T13:24:37.556310994Z" level=info msg="Adding pod default_hello-node-75c85bcc94-x49bt to CNI network \"kindnet\" (type=ptp)"
	Nov 24 13:24:37 functional-471703 crio[3550]: time="2025-11-24T13:24:37.567974726Z" level=info msg="Got pod network &{Name:hello-node-75c85bcc94-x49bt Namespace:default ID:0b0d6929ad56395af153d1fcbfe2a643a2159110ee2471f9505f79d68b48804f UID:ddcb7a87-fe0b-4dd0-8df5-2400573ea1bd NetNS:/var/run/netns/8c51d14a-7ca6-4863-9c8d-57a85637dcc6 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012c960}] Aliases:map[]}"
	Nov 24 13:24:37 functional-471703 crio[3550]: time="2025-11-24T13:24:37.568131954Z" level=info msg="Checking pod default_hello-node-75c85bcc94-x49bt for CNI network kindnet (type=ptp)"
	Nov 24 13:24:37 functional-471703 crio[3550]: time="2025-11-24T13:24:37.571601947Z" level=info msg="Ran pod sandbox 0b0d6929ad56395af153d1fcbfe2a643a2159110ee2471f9505f79d68b48804f with infra container: default/hello-node-75c85bcc94-x49bt/POD" id=9e14dacf-d2cf-414d-8423-507e1577d53d name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 24 13:24:37 functional-471703 crio[3550]: time="2025-11-24T13:24:37.574544181Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=9f4f06f2-d579-42f0-9148-bf09e22ecb43 name=/runtime.v1.ImageService/PullImage
	Nov 24 13:24:52 functional-471703 crio[3550]: time="2025-11-24T13:24:52.464649485Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=8a5e5eb9-bd69-40ca-9c10-7532db60caaa name=/runtime.v1.ImageService/PullImage
	Nov 24 13:25:02 functional-471703 crio[3550]: time="2025-11-24T13:25:02.464054271Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=219f81cb-a8df-4b98-af1d-6132d1fdf7ac name=/runtime.v1.ImageService/PullImage
	Nov 24 13:25:14 functional-471703 crio[3550]: time="2025-11-24T13:25:14.463201901Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=d259f0ca-0ef7-4803-a8c7-06b966d7e310 name=/runtime.v1.ImageService/PullImage
	Nov 24 13:25:47 functional-471703 crio[3550]: time="2025-11-24T13:25:47.462820637Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=a0572c7a-e70f-43d9-bfca-d9194866e6a3 name=/runtime.v1.ImageService/PullImage
	Nov 24 13:25:55 functional-471703 crio[3550]: time="2025-11-24T13:25:55.463478169Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=66a5f557-9e2a-41d6-994e-e9ccbe981f85 name=/runtime.v1.ImageService/PullImage
	Nov 24 13:27:14 functional-471703 crio[3550]: time="2025-11-24T13:27:14.462491279Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=38b9423b-24ae-4e54-873d-3e64d0f65b93 name=/runtime.v1.ImageService/PullImage
	Nov 24 13:27:18 functional-471703 crio[3550]: time="2025-11-24T13:27:18.463007521Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=451c1029-3568-4fa9-9e37-c8c306584c83 name=/runtime.v1.ImageService/PullImage
	Nov 24 13:30:03 functional-471703 crio[3550]: time="2025-11-24T13:30:03.462618613Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=7749dac6-c165-45b0-808c-1548be15b7b4 name=/runtime.v1.ImageService/PullImage
	Nov 24 13:30:09 functional-471703 crio[3550]: time="2025-11-24T13:30:09.462885872Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=794daec3-41f3-4c87-abac-bab3c8bf6bb0 name=/runtime.v1.ImageService/PullImage
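	
	Note: from 13:24:37 through 13:30:09 CRI-O logs repeated "Pulling image: kicbase/echo-server:latest" entries with no matching "Pulled image" line, which is consistent with the hello-node pods never becoming ready and the ServiceCmd/ServiceCmdConnect timeouts in the failure list. A sketch for inspecting the pull by hand (assumes the functional-471703 profile is still running):
	
	  # open a shell on the node
	  minikube -p functional-471703 ssh
	  # the image should be absent if no pull ever completed
	  sudo crictl images | grep echo-server
	  # retrying interactively surfaces the underlying registry or network error
	  sudo crictl pull kicbase/echo-server:latest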
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	b243143b743ad       docker.io/library/nginx@sha256:7de350c1fbb1f7b119a1d08f69fef5c92624cb01e03bc25c0ae11072b8969712       9 minutes ago       Running             myfrontend                0                   882b9e61c5141       sp-pod                                      default
	628e626ed9d04       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   10 minutes ago      Exited              mount-munger              0                   012f0bb69b9ce       busybox-mount                               default
	9eb08a5b1fcb1       docker.io/library/nginx@sha256:7391b3732e7f7ccd23ff1d02fbeadcde496f374d7460ad8e79260f8f6d2c9f90       10 minutes ago      Running             nginx                     0                   aeb80549efe13       nginx-svc                                   default
	8bb2c1b979e7e       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      10 minutes ago      Running             coredns                   2                   f832255aa5c04       coredns-66bc5c9577-thd8f                    kube-system
	c4bbc54d94778       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                      10 minutes ago      Running             kindnet-cni               2                   9c2ed4cc5c298       kindnet-jbj2n                               kube-system
	cd5e194d69eef       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      10 minutes ago      Running             storage-provisioner       2                   8cae7106aeef2       storage-provisioner                         kube-system
	0568af03d635e       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      10 minutes ago      Running             kube-proxy                2                   342e186d71161       kube-proxy-2klr6                            kube-system
	51d75dbf30e47       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      10 minutes ago      Running             kube-apiserver            0                   1a468cc3b6ac5       kube-apiserver-functional-471703            kube-system
	5f74ab8266a1b       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      10 minutes ago      Running             kube-controller-manager   2                   dcf5cbf8ced08       kube-controller-manager-functional-471703   kube-system
	d631a040e85a1       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      10 minutes ago      Running             kube-scheduler            2                   0ecaa6478bd16       kube-scheduler-functional-471703            kube-system
	d3059953fa7d9       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      10 minutes ago      Running             etcd                      2                   5c4d74798d8b7       etcd-functional-471703                      kube-system
	a69058794a0e4       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      11 minutes ago      Exited              kube-scheduler            1                   0ecaa6478bd16       kube-scheduler-functional-471703            kube-system
	68a3a2d8d777c       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      11 minutes ago      Exited              storage-provisioner       1                   8cae7106aeef2       storage-provisioner                         kube-system
	444d1ac290bc5       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      11 minutes ago      Exited              coredns                   1                   f832255aa5c04       coredns-66bc5c9577-thd8f                    kube-system
	98cf5592f2120       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                      11 minutes ago      Exited              kindnet-cni               1                   9c2ed4cc5c298       kindnet-jbj2n                               kube-system
	52cb28626ef4c       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      11 minutes ago      Exited              kube-proxy                1                   342e186d71161       kube-proxy-2klr6                            kube-system
	31bd0f2e8fa40       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      11 minutes ago      Exited              etcd                      1                   5c4d74798d8b7       etcd-functional-471703                      kube-system
	c7d48b502ee29       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      11 minutes ago      Exited              kube-controller-manager   1                   dcf5cbf8ced08       kube-controller-manager-functional-471703   kube-system
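	
	Note: the ATTEMPT column lines up with the restart history: each core workload has an exited ATTEMPT-1 container from ~11 minutes ago and a running ATTEMPT-2 replacement from ~10 minutes ago, matching the two cluster restarts this functional test performs; kube-apiserver appears only once, at ATTEMPT 0, likely because it came up in a fresh sandbox (new POD ID) rather than restarting in place. A sketch for drilling into the exited generation:
	
	  # list only exited containers (the ATTEMPT-1 rows above)
	  sudo crictl ps -a --state exited
	  # dump low-level state for one of them, e.g. the old kube-controller-manager
	  sudo crictl inspect c7d48b502ee29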
	
	
	==> coredns [444d1ac290bc5d76ddf4282411f1c346b7d64bbc9da03b44be199f2860bf84d2] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:49001 - 4609 "HINFO IN 3671122110780171095.7599474649613660664. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010949011s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [8bb2c1b979e7e02c86ef16e0501f037e49239540d4f3b358718826a16ce3dd67] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:36679 - 39336 "HINFO IN 6073573874663699635.8486785256422137844. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.02316227s
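	
	Note: the first coredns container (444d1ac…) received SIGTERM and went through its 5s lameduck window during the restart; its replacement (8bb2c1b…) loaded the same configuration SHA512 and answered its HINFO self-check in ~23ms, so in-cluster DNS recovered. A one-shot probe to confirm resolution (a sketch; the busybox:1.36 image tag is an assumption):
	
	  kubectl run dns-probe --rm -it --restart=Never --image=busybox:1.36 -- nslookup kubernetes.default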
	
	
	==> describe nodes <==
	Name:               functional-471703
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-471703
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b5d1c9f4e75f4e638a533695fd62619949cefcab
	                    minikube.k8s.io/name=functional-471703
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T13_21_47_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 13:21:43 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-471703
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 13:34:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 13:34:02 +0000   Mon, 24 Nov 2025 13:21:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 13:34:02 +0000   Mon, 24 Nov 2025 13:21:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 13:34:02 +0000   Mon, 24 Nov 2025 13:21:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Nov 2025 13:34:02 +0000   Mon, 24 Nov 2025 13:22:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-471703
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 7283ea1857f18f20a875c29069214c9d
	  System UUID:                2be102a3-c498-4a96-b6e0-aa02ec8115ab
	  Boot ID:                    1b5f797b-5607-4a65-8de2-379783b7e272
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-x49bt                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m49s
	  default                     hello-node-connect-7d85dfc575-l5lkz          0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m56s
	  kube-system                 coredns-66bc5c9577-thd8f                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     12m
	  kube-system                 etcd-functional-471703                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         12m
	  kube-system                 kindnet-jbj2n                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-functional-471703             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-471703    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-2klr6                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-functional-471703             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 12m                kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   Starting                 11m                kube-proxy       
	  Normal   NodeHasSufficientMemory  12m (x8 over 12m)  kubelet          Node functional-471703 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x8 over 12m)  kubelet          Node functional-471703 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x8 over 12m)  kubelet          Node functional-471703 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientPID     12m                kubelet          Node functional-471703 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 12m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  12m                kubelet          Node functional-471703 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m                kubelet          Node functional-471703 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 12m                kubelet          Starting kubelet.
	  Normal   RegisteredNode           12m                node-controller  Node functional-471703 event: Registered Node functional-471703 in Controller
	  Normal   NodeReady                11m                kubelet          Node functional-471703 status is now: NodeReady
	  Normal   RegisteredNode           11m                node-controller  Node functional-471703 event: Registered Node functional-471703 in Controller
	  Normal   NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node functional-471703 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 10m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 10m                kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node functional-471703 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x8 over 10m)  kubelet          Node functional-471703 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           10m                node-controller  Node functional-471703 event: Registered Node functional-471703 in Controller
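	
	Note: the "Allocated resources" percentages follow directly from the request sums over node capacity: CPU requests are 100m (coredns) + 100m (etcd) + 100m (kindnet) + 250m (kube-apiserver) + 200m (kube-controller-manager) + 100m (kube-scheduler) = 850m out of 2000m, i.e. 850/2000 = 42.5%, displayed truncated as 42%; memory requests are 70Mi + 100Mi + 50Mi = 220Mi out of 8022300Ki (~7834Mi), about 2.8%, displayed as 2%. The limits column is just coredns's 170Mi plus kindnet's 100m/50Mi, hence 100m (5%) CPU and 220Mi (2%) memory.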
	
	
	==> dmesg <==
	[Nov24 12:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.015884] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.504458] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.033874] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.788873] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.144374] kauditd_printk_skb: 36 callbacks suppressed
	[Nov24 13:13] kauditd_printk_skb: 5 callbacks suppressed
	[Nov24 13:15] overlayfs: idmapped layers are currently not supported
	[  +0.074288] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Nov24 13:20] overlayfs: idmapped layers are currently not supported
	[Nov24 13:21] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [31bd0f2e8fa40bab7da7de6709c33e34794bb10006d371481970728fafe50345] <==
	{"level":"warn","ts":"2025-11-24T13:22:49.741972Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49122","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:22:49.755774Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49134","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:22:49.780716Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49158","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:22:49.813118Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49176","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:22:49.832194Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49196","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:22:49.847900Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49200","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:22:49.905850Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49216","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-24T13:23:15.130143Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-11-24T13:23:15.130188Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-471703","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-11-24T13:23:15.130313Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-24T13:23:15.148205Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"warn","ts":"2025-11-24T13:23:15.283755Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-24T13:23:15.283920Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-24T13:23:15.283966Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-11-24T13:23:15.283756Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-24T13:23:15.284065Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-24T13:23:15.284120Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"error","ts":"2025-11-24T13:23:15.284206Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-24T13:23:15.284269Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-11-24T13:23:15.284334Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-11-24T13:23:15.284402Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-11-24T13:23:15.288408Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-11-24T13:23:15.288501Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-24T13:23:15.288542Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-11-24T13:23:15.288549Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-471703","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [d3059953fa7d936fd7f036a17ce8f2e81efa04b431fb712ae1ebd38968825b3d] <==
	{"level":"warn","ts":"2025-11-24T13:23:36.216904Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45348","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:23:36.228042Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45360","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:23:36.244144Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45374","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:23:36.291074Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45400","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:23:36.299955Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45412","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:23:36.316892Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45436","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:23:36.337462Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45448","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:23:36.355901Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45462","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:23:36.377833Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45482","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:23:36.397609Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45498","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:23:36.406915Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45508","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:23:36.446827Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45526","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:23:36.452930Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45538","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:23:36.477494Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45556","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:23:36.494445Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45586","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:23:36.509154Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45598","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:23:36.527422Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45618","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:23:36.575979Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45644","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:23:36.595952Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45660","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:23:36.627956Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45682","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:23:36.659906Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45698","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:23:36.831979Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45724","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-24T13:33:34.938297Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1146}
	{"level":"info","ts":"2025-11-24T13:33:34.962650Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1146,"took":"23.754761ms","hash":5901306,"current-db-size-bytes":3338240,"current-db-size":"3.3 MB","current-db-size-in-use-bytes":1503232,"current-db-size-in-use":"1.5 MB"}
	{"level":"info","ts":"2025-11-24T13:33:34.962705Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":5901306,"revision":1146,"compact-revision":-1}
	
	
	==> kernel <==
	 13:34:26 up  1:16,  0 user,  load average: 0.03, 0.36, 0.56
	Linux functional-471703 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [98cf5592f21207f801aac9aab46072377e399092db918d85a163bcc8f6495887] <==
	I1124 13:22:47.069409       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1124 13:22:47.126712       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1124 13:22:47.126852       1 main.go:148] setting mtu 1500 for CNI 
	I1124 13:22:47.126864       1 main.go:178] kindnetd IP family: "ipv4"
	I1124 13:22:47.126879       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-24T13:22:47Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1124 13:22:47.257866       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1124 13:22:47.257942       1 controller.go:381] "Waiting for informer caches to sync"
	I1124 13:22:47.257976       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1124 13:22:47.268039       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1124 13:22:50.824144       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1124 13:22:50.824172       1 metrics.go:72] Registering metrics
	I1124 13:22:50.824227       1 controller.go:711] "Syncing nftables rules"
	I1124 13:22:57.257844       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 13:22:57.257903       1 main.go:301] handling current node
	I1124 13:23:07.257464       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 13:23:07.257506       1 main.go:301] handling current node
	
	
	==> kindnet [c4bbc54d94778b6d8e83f4cfc5fef5b50106336e7d7bfa39a8c7f26fbd7ef4b6] <==
	I1124 13:32:19.216783       1 main.go:301] handling current node
	I1124 13:32:29.214823       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 13:32:29.214855       1 main.go:301] handling current node
	I1124 13:32:39.222409       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 13:32:39.222506       1 main.go:301] handling current node
	I1124 13:32:49.219421       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 13:32:49.219454       1 main.go:301] handling current node
	I1124 13:32:59.223450       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 13:32:59.223559       1 main.go:301] handling current node
	I1124 13:33:09.216717       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 13:33:09.216752       1 main.go:301] handling current node
	I1124 13:33:19.219482       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 13:33:19.219518       1 main.go:301] handling current node
	I1124 13:33:29.219464       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 13:33:29.219803       1 main.go:301] handling current node
	I1124 13:33:39.216775       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 13:33:39.216902       1 main.go:301] handling current node
	I1124 13:33:49.222521       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 13:33:49.222557       1 main.go:301] handling current node
	I1124 13:33:59.221270       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 13:33:59.221308       1 main.go:301] handling current node
	I1124 13:34:09.214841       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 13:34:09.214887       1 main.go:301] handling current node
	I1124 13:34:19.217904       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 13:34:19.217939       1 main.go:301] handling current node
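	
	Note: both kindnet generations converge on the same steady state, a 10-second reconcile loop over the single node's IP map; the "nri plugin exited" line in the earlier instance is a soft failure (no NRI socket on this node), not a crash. To watch the loop live (a sketch; the app=kindnet label is an assumption about the DaemonSet's manifest):
	
	  kubectl -n kube-system logs -l app=kindnet --tail=5 -f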
	
	
	==> kube-apiserver [51d75dbf30e4763facf5e68f56e081b60fe442d098603aa992a64baa9dc993b8] <==
	I1124 13:23:37.897675       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 13:23:37.917022       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1124 13:23:37.917120       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1124 13:23:37.917202       1 aggregator.go:171] initial CRD sync complete...
	I1124 13:23:37.917345       1 autoregister_controller.go:144] Starting autoregister controller
	I1124 13:23:37.917381       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1124 13:23:37.917426       1 cache.go:39] Caches are synced for autoregister controller
	I1124 13:23:37.917278       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1124 13:23:37.932177       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1124 13:23:38.526448       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1124 13:23:38.608939       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1124 13:23:40.071322       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1124 13:23:40.192448       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1124 13:23:40.260413       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1124 13:23:40.270427       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1124 13:23:41.198090       1 controller.go:667] quota admission added evaluator for: endpoints
	I1124 13:23:41.484317       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1124 13:23:41.582207       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1124 13:23:55.572964       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.104.104.41"}
	I1124 13:24:01.128165       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.108.164.26"}
	I1124 13:24:24.085007       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.101.195.205"}
	E1124 13:24:30.838928       1 watch.go:272] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	E1124 13:24:37.134185       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:34442: use of closed network connection
	I1124 13:24:37.347657       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.105.179.13"}
	I1124 13:33:37.832464       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
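	
	Note: the apiserver log tracks the test workloads directly: ClusterIPs are allocated for invalid-svc, nginx-svc, hello-node-connect and hello-node as each test creates its Service, and the 13:24:30/13:24:37 stream errors are clients disconnecting mid-watch, not server faults. The allocations can be cross-checked against the live objects:
	
	  kubectl get svc -n default -o wide
	  # CLUSTER-IP values should match the alloc.go lines above, e.g. hello-node -> 10.105.179.13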
	
	
	==> kube-controller-manager [5f74ab8266a1b587f5d06f65fdf21b22549ad5d438d7545f855e51226407aeaa] <==
	I1124 13:23:41.203449       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1124 13:23:41.203539       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1124 13:23:41.203633       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1124 13:23:41.203729       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-471703"
	I1124 13:23:41.203806       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1124 13:23:41.208615       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1124 13:23:41.211089       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 13:23:41.212260       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1124 13:23:41.218505       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1124 13:23:41.224606       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1124 13:23:41.225842       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1124 13:23:41.225900       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1124 13:23:41.225939       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1124 13:23:41.225972       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1124 13:23:41.227323       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1124 13:23:41.227464       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1124 13:23:41.227672       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1124 13:23:41.229704       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1124 13:23:41.231497       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1124 13:23:41.236788       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1124 13:23:41.248078       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 13:23:41.248161       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1124 13:23:41.248192       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1124 13:23:41.248472       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 13:23:41.253393       1 shared_informer.go:356] "Caches are synced" controller="job"
	
	
	==> kube-controller-manager [c7d48b502ee2949783d0790eae05582b7e705ca4b38e3d9d697c46f7934e764a] <==
	I1124 13:22:54.065902       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1124 13:22:54.069935       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1124 13:22:54.070142       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1124 13:22:54.065919       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1124 13:22:54.074056       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1124 13:22:54.074867       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 13:22:54.076012       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1124 13:22:54.083254       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1124 13:22:54.083435       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1124 13:22:54.089607       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 13:22:54.089707       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1124 13:22:54.089741       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1124 13:22:54.093817       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1124 13:22:54.105568       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 13:22:54.110680       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1124 13:22:54.110871       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1124 13:22:54.110997       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-471703"
	I1124 13:22:54.111085       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1124 13:22:54.114398       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1124 13:22:54.114580       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1124 13:22:54.114760       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1124 13:22:54.114506       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1124 13:22:54.114848       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1124 13:22:54.115011       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1124 13:22:54.124812       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	
	
	==> kube-proxy [0568af03d635ebe0930edf01f6cb586b5d9ba7d70b1b5f24584cdfffb57dcb97] <==
	I1124 13:23:39.273880       1 server_linux.go:53] "Using iptables proxy"
	I1124 13:23:39.383211       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1124 13:23:39.483636       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1124 13:23:39.483707       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1124 13:23:39.483828       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 13:23:39.508921       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 13:23:39.508995       1 server_linux.go:132] "Using iptables Proxier"
	I1124 13:23:39.513589       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 13:23:39.514181       1 server.go:527] "Version info" version="v1.34.1"
	I1124 13:23:39.514210       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 13:23:39.516279       1 config.go:200] "Starting service config controller"
	I1124 13:23:39.516299       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 13:23:39.516348       1 config.go:106] "Starting endpoint slice config controller"
	I1124 13:23:39.516355       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 13:23:39.516367       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 13:23:39.516371       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1124 13:23:39.518692       1 config.go:309] "Starting node config controller"
	I1124 13:23:39.518713       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 13:23:39.518721       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1124 13:23:39.616794       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1124 13:23:39.616818       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1124 13:23:39.616830       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [52cb28626ef4c494529889ecdd128a18b2efe73d12cb776090a784b9bfa9c784] <==
	I1124 13:22:47.218327       1 server_linux.go:53] "Using iptables proxy"
	I1124 13:22:48.912645       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1124 13:22:50.771911       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1124 13:22:50.772025       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1124 13:22:50.772138       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 13:22:51.031048       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 13:22:51.031201       1 server_linux.go:132] "Using iptables Proxier"
	I1124 13:22:51.038297       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 13:22:51.038677       1 server.go:527] "Version info" version="v1.34.1"
	I1124 13:22:51.038893       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 13:22:51.040358       1 config.go:200] "Starting service config controller"
	I1124 13:22:51.040427       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 13:22:51.040470       1 config.go:106] "Starting endpoint slice config controller"
	I1124 13:22:51.040512       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 13:22:51.040549       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 13:22:51.040575       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1124 13:22:51.041282       1 config.go:309] "Starting node config controller"
	I1124 13:22:51.048692       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 13:22:51.048793       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1124 13:22:51.140971       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1124 13:22:51.141091       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1124 13:22:51.141106       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [a69058794a0e4ff800e63f86ff8d9884c8f482f5ed68f560c71d0670674ed3e3] <==
	I1124 13:22:49.339235       1 serving.go:386] Generated self-signed cert in-memory
	I1124 13:22:50.893400       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1124 13:22:50.893429       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 13:22:50.909051       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1124 13:22:50.909160       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1124 13:22:50.909177       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1124 13:22:50.909199       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1124 13:22:50.931963       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 13:22:50.934398       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 13:22:50.934463       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1124 13:22:50.934483       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1124 13:22:51.012328       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1124 13:22:51.034621       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 13:22:51.034573       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1124 13:23:15.137581       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1124 13:23:15.137608       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1124 13:23:15.137628       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1124 13:23:15.137653       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1124 13:23:15.137673       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 13:23:15.137690       1 requestheader_controller.go:194] Shutting down RequestHeaderAuthRequestController
	I1124 13:23:15.137893       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1124 13:23:15.137917       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [d631a040e85a1b144b0c276c21afa699be5c3f1cd198a08e866cc0d5d80089cf] <==
	I1124 13:23:35.810469       1 serving.go:386] Generated self-signed cert in-memory
	I1124 13:23:39.432177       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1124 13:23:39.432210       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 13:23:39.437853       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1124 13:23:39.437959       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1124 13:23:39.437986       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1124 13:23:39.438016       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1124 13:23:39.440342       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 13:23:39.440372       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 13:23:39.440390       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1124 13:23:39.440397       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1124 13:23:39.539046       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1124 13:23:39.541313       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1124 13:23:39.541405       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 24 13:31:46 functional-471703 kubelet[3868]: E1124 13:31:46.462398    3868 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-l5lkz" podUID="38f1a276-9880-4d11-817c-08f82bab9329"
	Nov 24 13:31:47 functional-471703 kubelet[3868]: E1124 13:31:47.462195    3868 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-x49bt" podUID="ddcb7a87-fe0b-4dd0-8df5-2400573ea1bd"
	Nov 24 13:32:00 functional-471703 kubelet[3868]: E1124 13:32:00.462332    3868 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-x49bt" podUID="ddcb7a87-fe0b-4dd0-8df5-2400573ea1bd"
	Nov 24 13:32:01 functional-471703 kubelet[3868]: E1124 13:32:01.462051    3868 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-l5lkz" podUID="38f1a276-9880-4d11-817c-08f82bab9329"
	Nov 24 13:32:11 functional-471703 kubelet[3868]: E1124 13:32:11.461952    3868 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-x49bt" podUID="ddcb7a87-fe0b-4dd0-8df5-2400573ea1bd"
	Nov 24 13:32:13 functional-471703 kubelet[3868]: E1124 13:32:13.461678    3868 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-l5lkz" podUID="38f1a276-9880-4d11-817c-08f82bab9329"
	Nov 24 13:32:24 functional-471703 kubelet[3868]: E1124 13:32:24.462424    3868 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-x49bt" podUID="ddcb7a87-fe0b-4dd0-8df5-2400573ea1bd"
	Nov 24 13:32:27 functional-471703 kubelet[3868]: E1124 13:32:27.462650    3868 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-l5lkz" podUID="38f1a276-9880-4d11-817c-08f82bab9329"
	Nov 24 13:32:36 functional-471703 kubelet[3868]: E1124 13:32:36.462907    3868 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-x49bt" podUID="ddcb7a87-fe0b-4dd0-8df5-2400573ea1bd"
	Nov 24 13:32:38 functional-471703 kubelet[3868]: E1124 13:32:38.464004    3868 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-l5lkz" podUID="38f1a276-9880-4d11-817c-08f82bab9329"
	Nov 24 13:32:49 functional-471703 kubelet[3868]: E1124 13:32:49.462277    3868 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-l5lkz" podUID="38f1a276-9880-4d11-817c-08f82bab9329"
	Nov 24 13:32:49 functional-471703 kubelet[3868]: E1124 13:32:49.462849    3868 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-x49bt" podUID="ddcb7a87-fe0b-4dd0-8df5-2400573ea1bd"
	Nov 24 13:33:02 functional-471703 kubelet[3868]: E1124 13:33:02.463962    3868 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-x49bt" podUID="ddcb7a87-fe0b-4dd0-8df5-2400573ea1bd"
	Nov 24 13:33:02 functional-471703 kubelet[3868]: E1124 13:33:02.464012    3868 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-l5lkz" podUID="38f1a276-9880-4d11-817c-08f82bab9329"
	Nov 24 13:33:16 functional-471703 kubelet[3868]: E1124 13:33:16.462460    3868 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-x49bt" podUID="ddcb7a87-fe0b-4dd0-8df5-2400573ea1bd"
	Nov 24 13:33:17 functional-471703 kubelet[3868]: E1124 13:33:17.462031    3868 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-l5lkz" podUID="38f1a276-9880-4d11-817c-08f82bab9329"
	Nov 24 13:33:31 functional-471703 kubelet[3868]: E1124 13:33:31.461698    3868 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-x49bt" podUID="ddcb7a87-fe0b-4dd0-8df5-2400573ea1bd"
	Nov 24 13:33:32 functional-471703 kubelet[3868]: E1124 13:33:32.463021    3868 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-l5lkz" podUID="38f1a276-9880-4d11-817c-08f82bab9329"
	Nov 24 13:33:45 functional-471703 kubelet[3868]: E1124 13:33:45.461729    3868 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-x49bt" podUID="ddcb7a87-fe0b-4dd0-8df5-2400573ea1bd"
	Nov 24 13:33:45 functional-471703 kubelet[3868]: E1124 13:33:45.461937    3868 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-l5lkz" podUID="38f1a276-9880-4d11-817c-08f82bab9329"
	Nov 24 13:33:56 functional-471703 kubelet[3868]: E1124 13:33:56.462942    3868 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-l5lkz" podUID="38f1a276-9880-4d11-817c-08f82bab9329"
	Nov 24 13:33:59 functional-471703 kubelet[3868]: E1124 13:33:59.461883    3868 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-x49bt" podUID="ddcb7a87-fe0b-4dd0-8df5-2400573ea1bd"
	Nov 24 13:34:11 functional-471703 kubelet[3868]: E1124 13:34:11.462269    3868 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-l5lkz" podUID="38f1a276-9880-4d11-817c-08f82bab9329"
	Nov 24 13:34:13 functional-471703 kubelet[3868]: E1124 13:34:13.462161    3868 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-x49bt" podUID="ddcb7a87-fe0b-4dd0-8df5-2400573ea1bd"
	Nov 24 13:34:24 functional-471703 kubelet[3868]: E1124 13:34:24.462168    3868 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-l5lkz" podUID="38f1a276-9880-4d11-817c-08f82bab9329"
	
	
	==> storage-provisioner [68a3a2d8d777cf152b00ac6aa8b413eb8c1f7f11e28bb5c2304dcacb254eae67] <==
	I1124 13:22:47.228089       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1124 13:22:50.762541       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1124 13:22:50.764049       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1124 13:22:50.778178       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:22:54.270572       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:22:58.530747       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:23:02.128920       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:23:05.190084       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:23:08.212738       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:23:08.217880       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1124 13:23:08.218036       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1124 13:23:08.218205       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-471703_29094387-6ce5-4d87-8749-6c07ff62eed4!
	I1124 13:23:08.218420       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0381e57f-cd43-4a71-b23b-0cea22d2a441", APIVersion:"v1", ResourceVersion:"569", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-471703_29094387-6ce5-4d87-8749-6c07ff62eed4 became leader
	W1124 13:23:08.221832       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:23:08.233074       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1124 13:23:08.319180       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-471703_29094387-6ce5-4d87-8749-6c07ff62eed4!
	W1124 13:23:10.236502       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:23:10.241504       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:23:12.245342       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:23:12.249812       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:23:14.252599       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:23:14.260571       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [cd5e194d69eefb8dc019f8a5e2191c7a75d17f50476faf4bb31792536c43487a] <==
	W1124 13:34:01.514683       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:34:03.518510       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:34:03.524908       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:34:05.527954       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:34:05.532172       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:34:07.535496       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:34:07.540239       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:34:09.543730       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:34:09.550678       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:34:11.554355       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:34:11.558892       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:34:13.561777       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:34:13.568424       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:34:15.571415       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:34:15.575643       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:34:17.579182       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:34:17.583612       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:34:19.586639       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:34:19.590924       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:34:21.593922       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:34:21.600268       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:34:23.604618       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:34:23.609296       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:34:25.612581       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:34:25.618656       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-471703 -n functional-471703
helpers_test.go:269: (dbg) Run:  kubectl --context functional-471703 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-x49bt hello-node-connect-7d85dfc575-l5lkz
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-471703 describe pod busybox-mount hello-node-75c85bcc94-x49bt hello-node-connect-7d85dfc575-l5lkz
helpers_test.go:290: (dbg) kubectl --context functional-471703 describe pod busybox-mount hello-node-75c85bcc94-x49bt hello-node-connect-7d85dfc575-l5lkz:

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-471703/192.168.49.2
	Start Time:       Mon, 24 Nov 2025 13:24:13 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.5
	IPs:
	  IP:  10.244.0.5
	Containers:
	  mount-munger:
	    Container ID:  cri-o://628e626ed9d04e0c7539ebe8d0395996a8961db6557f104a6d893ed4b63ebf62
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Mon, 24 Nov 2025 13:24:15 +0000
	      Finished:     Mon, 24 Nov 2025 13:24:15 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-k99tr (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-k99tr:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  10m   default-scheduler  Successfully assigned default/busybox-mount to functional-471703
	  Normal  Pulling    10m   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     10m   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.939s (1.939s including waiting). Image size: 3774172 bytes.
	  Normal  Created    10m   kubelet            Created container: mount-munger
	  Normal  Started    10m   kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-x49bt
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-471703/192.168.49.2
	Start Time:       Mon, 24 Nov 2025 13:24:37 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.9
	IPs:
	  IP:           10.244.0.9
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-v75n7 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-v75n7:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m50s                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-x49bt to functional-471703
	  Normal   Pulling    7m9s (x5 over 9m50s)    kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m9s (x5 over 9m50s)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     7m9s (x5 over 9m50s)    kubelet            Error: ErrImagePull
	  Warning  Failed     4m44s (x20 over 9m50s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m30s (x21 over 9m50s)  kubelet            Back-off pulling image "kicbase/echo-server"
	
	
	Name:             hello-node-connect-7d85dfc575-l5lkz
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-471703/192.168.49.2
	Start Time:       Mon, 24 Nov 2025 13:24:24 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.7
	IPs:
	  IP:           10.244.0.7
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-t94pb (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-t94pb:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  10m                  default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-l5lkz to functional-471703
	  Normal   Pulling    7m13s (x5 over 10m)  kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m13s (x5 over 10m)  kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     7m13s (x5 over 10m)  kubelet            Error: ErrImagePull
	  Normal   BackOff    3s (x43 over 10m)    kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     3s (x43 over 10m)    kubelet            Error: ImagePullBackOff

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (603.39s)
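
Every image pull in the post-mortem above fails the same way: "short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list". That message comes from CRI-O's containers-registries short-name resolution rejecting an unqualified image name. As a hedged sketch (not part of the test): if the node's CRI-O honors drop-in files under /etc/containers/registries.conf.d, a short-name alias would make the pull unambiguous. The file name, alias target, and restart step below are illustrative assumptions.

	# Hypothetical workaround on the minikube node; <<- strips the leading tabs.
	minikube -p functional-471703 ssh -- sudo tee /etc/containers/registries.conf.d/99-echo-server.conf <<-'EOF'
	# assumed alias: pin the short name to one fully qualified image
	[aliases]
	  "kicbase/echo-server" = "docker.io/kicbase/echo-server"
	EOF
	minikube -p functional-471703 ssh -- sudo systemctl restart crio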

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.92s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-471703 image load --daemon kicbase/echo-server:functional-471703 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-471703 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-471703" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.92s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.89s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-471703 image load --daemon kicbase/echo-server:functional-471703 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-471703 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-471703" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.89s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-471703
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-471703 image load --daemon kicbase/echo-server:functional-471703 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-471703 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-471703" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.14s)
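
The three daemon-load failures above share one path: `image load --daemon` copies the tagged image out of the host Docker daemon into the cluster's container storage, and the check is a follow-up `image ls`. A minimal by-hand version of that round trip, assuming the tag is actually present in the host daemon first:

	# confirm the tag exists in the host Docker daemon before loading
	docker image inspect kicbase/echo-server:functional-471703 --format '{{.Id}}'
	# same load/verify pair the test runs
	out/minikube-linux-arm64 -p functional-471703 image load --daemon kicbase/echo-server:functional-471703 --alsologtostderr
	out/minikube-linux-arm64 -p functional-471703 image ls | grep echo-server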

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-471703 image save kicbase/echo-server:functional-471703 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:401: expected "/home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.31s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-471703 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:426: loading image into minikube from file: <nil>

                                                
                                                
** stderr ** 
	I1124 13:24:07.625232   27447 out.go:360] Setting OutFile to fd 1 ...
	I1124 13:24:07.625489   27447 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:24:07.625522   27447 out.go:374] Setting ErrFile to fd 2...
	I1124 13:24:07.625543   27447 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:24:07.625823   27447 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-2805/.minikube/bin
	I1124 13:24:07.626480   27447 config.go:182] Loaded profile config "functional-471703": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 13:24:07.626641   27447 config.go:182] Loaded profile config "functional-471703": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 13:24:07.627184   27447 cli_runner.go:164] Run: docker container inspect functional-471703 --format={{.State.Status}}
	I1124 13:24:07.648605   27447 ssh_runner.go:195] Run: systemctl --version
	I1124 13:24:07.648659   27447 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-471703
	I1124 13:24:07.665373   27447 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/functional-471703/id_rsa Username:docker}
	I1124 13:24:07.777958   27447 cache_images.go:291] Loading image from: /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar
	W1124 13:24:07.778029   27447 cache_images.go:255] Failed to load cached images for "functional-471703": loading images: stat /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar: no such file or directory
	I1124 13:24:07.778055   27447 cache_images.go:267] failed pushing to: functional-471703

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.21s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-471703
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-471703 image save --daemon kicbase/echo-server:functional-471703 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-471703
functional_test.go:447: (dbg) Non-zero exit: docker image inspect localhost/kicbase/echo-server:functional-471703: exit status 1 (21.787088ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-471703

                                                
                                                
** /stderr **
functional_test.go:449: expected image to be loaded into Docker, but image was not found: exit status 1

                                                
                                                
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.37s)
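
These three file-based failures cascade from the first one: `image save` exits without writing the tar, so the later `image load` finds nothing at that path ("no such file or directory" in the stderr above) and the save-to-daemon check fails because the image never reached the cluster. A hedged manual reproduction, with /tmp/echo-server.tar as an illustrative path:

	# save should produce a tar on the host; verify before loading it back
	out/minikube-linux-arm64 -p functional-471703 image save kicbase/echo-server:functional-471703 /tmp/echo-server.tar
	test -f /tmp/echo-server.tar || echo 'image save wrote no archive'
	out/minikube-linux-arm64 -p functional-471703 image load /tmp/echo-server.tar
	out/minikube-linux-arm64 -p functional-471703 image ls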

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (600.83s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-471703 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-471703 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-x49bt" [ddcb7a87-fe0b-4dd0-8df5-2400573ea1bd] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
E1124 13:24:49.756997    4611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/addons-647907/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:27:05.886870    4611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/addons-647907/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:27:33.598996    4611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/addons-647907/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:32:05.887400    4611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/addons-647907/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestFunctional/parallel/ServiceCmd/DeployApp: WARNING: pod list for "default" "app=hello-node" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-471703 -n functional-471703
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-11-24 13:34:37.79860931 +0000 UTC m=+1241.111830637
functional_test.go:1460: (dbg) Run:  kubectl --context functional-471703 describe po hello-node-75c85bcc94-x49bt -n default
functional_test.go:1460: (dbg) kubectl --context functional-471703 describe po hello-node-75c85bcc94-x49bt -n default:
Name:             hello-node-75c85bcc94-x49bt
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-471703/192.168.49.2
Start Time:       Mon, 24 Nov 2025 13:24:37 +0000
Labels:           app=hello-node
                  pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.9
IPs:
  IP:           10.244.0.9
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-v75n7 (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-v75n7:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-x49bt to functional-471703
  Normal   Pulling    7m19s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     7m19s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
  Warning  Failed     7m19s (x5 over 10m)   kubelet            Error: ErrImagePull
  Warning  Failed     4m54s (x20 over 10m)  kubelet            Error: ImagePullBackOff
  Normal   BackOff    4m40s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
functional_test.go:1460: (dbg) Run:  kubectl --context functional-471703 logs hello-node-75c85bcc94-x49bt -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-471703 logs hello-node-75c85bcc94-x49bt -n default: exit status 1 (82.285461ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-x49bt" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1460: kubectl --context functional-471703 logs hello-node-75c85bcc94-x49bt -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (600.83s)
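
The deployment is created with the unqualified name kicbase/echo-server, which is exactly what the node's enforcing short-name mode rejects on every pull attempt. A sketch of the same deployment with a fully qualified image (an assumed variant, not the test's own fix; docker.io is the assumed registry):

	# recreate the deployment with an unambiguous image reference
	kubectl --context functional-471703 delete deployment hello-node
	kubectl --context functional-471703 create deployment hello-node --image=docker.io/kicbase/echo-server
	kubectl --context functional-471703 rollout status deployment/hello-node --timeout=5m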

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-471703 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-471703 service --namespace=default --https --url hello-node: exit status 115 (429.91661ms)

                                                
                                                
-- stdout --
	https://192.168.49.2:31162
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-arm64 -p functional-471703 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.43s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-471703 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-471703 service hello-node --url --format={{.IP}}: exit status 115 (504.169734ms)

                                                
                                                
-- stdout --
	192.168.49.2
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-arm64 -p functional-471703 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.50s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-471703 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-471703 service hello-node --url: exit status 115 (516.907523ms)

                                                
                                                
-- stdout --
	http://192.168.49.2:31162
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-arm64 -p functional-471703 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:31162
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.52s)
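All three ServiceCmd failures above (HTTPS, Format, URL) report the same SVC_UNREACHABLE cause: the hello-node service exists and has a NodePort (31162), but no backing pod ever became ready because of the image-pull failure in DeployApp. Illustrative commands (not part of the test) to confirm that only the endpoints are missing:

	# A NodePort service with zero ready endpoints explains
	# "no running pod for service hello-node found".
	kubectl --context functional-471703 -n default get endpoints hello-node
	kubectl --context functional-471703 -n default get pods -l app=hello-node -o wide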

                                                
                                    
x
+
TestJSONOutput/pause/Command (1.72s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-921919 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p json-output-921919 --output=json --user=testUser: exit status 80 (1.720596283s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"20af0594-c5a7-4a83-8edf-c8015cb98f7d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-921919 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"a3890826-50de-47de-bb79-291611d7cc30","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-11-24T13:47:54Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"0579e06a-9df9-43bd-949c-9dcc7fb92054","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 pause -p json-output-921919 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (1.72s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (1.93s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-921919 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 unpause -p json-output-921919 --output=json --user=testUser: exit status 80 (1.929483729s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"2a7ba040-7db5-4178-82ea-2bc30d86e131","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-921919 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"73b04c53-1bb4-46d4-9a94-7e6f6515eff1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-11-24T13:47:56Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"713b9cf4-3169-495f-b0e1-e2958c142f73","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log                 │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 unpause -p json-output-921919 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (1.93s)
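Both JSONOutput failures above stem from the same runc error: `sudo runc list -f json` exits 1 with "open /run/runc: no such file or directory", so minikube cannot enumerate containers to pause or unpause. A hedged diagnostic sketch (the profile name comes from the test; whether /run/runc should exist on this crio node is what the commands check, not an assumption):

	# Inspect the runc state directory and compare with what the CRI reports.
	minikube ssh -p json-output-921919 -- sudo ls -ld /run/runc
	minikube ssh -p json-output-921919 -- sudo runc list -f json
	minikube ssh -p json-output-921919 -- sudo crictl ps --state running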

                                                
                                    
x
+
TestPause/serial/Pause (7.56s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-007087 --alsologtostderr -v=5
pause_test.go:110: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p pause-007087 --alsologtostderr -v=5: exit status 80 (2.131920466s)

                                                
                                                
-- stdout --
	* Pausing node pause-007087 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1124 14:12:43.474892  168833 out.go:360] Setting OutFile to fd 1 ...
	I1124 14:12:43.475456  168833 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 14:12:43.475494  168833 out.go:374] Setting ErrFile to fd 2...
	I1124 14:12:43.475513  168833 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 14:12:43.475817  168833 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-2805/.minikube/bin
	I1124 14:12:43.476120  168833 out.go:368] Setting JSON to false
	I1124 14:12:43.476170  168833 mustload.go:66] Loading cluster: pause-007087
	I1124 14:12:43.476684  168833 config.go:182] Loaded profile config "pause-007087": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 14:12:43.477187  168833 cli_runner.go:164] Run: docker container inspect pause-007087 --format={{.State.Status}}
	I1124 14:12:43.503119  168833 host.go:66] Checking if "pause-007087" exists ...
	I1124 14:12:43.503474  168833 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 14:12:43.614787  168833 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:48 OomKillDisable:true NGoroutines:68 SystemTime:2025-11-24 14:12:43.591532746 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 14:12:43.615421  168833 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763503576-21924/minikube-v1.37.0-1763503576-21924-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763503576-21924-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-007087 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1124 14:12:43.622564  168833 out.go:179] * Pausing node pause-007087 ... 
	I1124 14:12:43.632862  168833 host.go:66] Checking if "pause-007087" exists ...
	I1124 14:12:43.633210  168833 ssh_runner.go:195] Run: systemctl --version
	I1124 14:12:43.633250  168833 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-007087
	I1124 14:12:43.660885  168833 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33023 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/pause-007087/id_rsa Username:docker}
	I1124 14:12:43.772351  168833 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 14:12:43.785932  168833 pause.go:52] kubelet running: true
	I1124 14:12:43.786013  168833 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1124 14:12:44.066825  168833 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1124 14:12:44.066925  168833 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1124 14:12:44.150369  168833 cri.go:89] found id: "2e7813c68b435644db6b5b631c6a51381cbf7671e158fa0cff68b4d62ce0855c"
	I1124 14:12:44.150440  168833 cri.go:89] found id: "5fcf211b67d41ed778023e86be7ab49fa92d6c1cc001a3c1bec7b4b36589f5ab"
	I1124 14:12:44.150460  168833 cri.go:89] found id: "4026e5c2f2cf1ab6bf194371e28d50a39a1daa43fcf0ab9101aa80b4a440331f"
	I1124 14:12:44.150482  168833 cri.go:89] found id: "d5ce86abc9d8a0d28e82df1fd675752db7a03c95cf7f8bd47507d4329d359af5"
	I1124 14:12:44.150518  168833 cri.go:89] found id: "76c82c34484bb53fc1c279f6d1eb1687216631d32b8457d880b2c185a6175dbd"
	I1124 14:12:44.150541  168833 cri.go:89] found id: "7bec6bf85a9734e71b9d06193b900b58d7d8467d320c984cf1a526c52bcc77c3"
	I1124 14:12:44.150563  168833 cri.go:89] found id: "8b84bd360beaa1b3826a8723ebe0bc93f98205c8ee561e2303b13f4dc0f5a7c9"
	I1124 14:12:44.150599  168833 cri.go:89] found id: "2ea4162d56185e1c61afd3d7a663045d3c50c97f7dd7ccda92ea411c09006854"
	I1124 14:12:44.150620  168833 cri.go:89] found id: "190f787cc9b57074c59aeec3510206ebd7988052da942a584ab393775b5ab95e"
	I1124 14:12:44.150644  168833 cri.go:89] found id: "f29d9247e0cecb84740fabf059644f3deb00691f26554c62425089202ee4784a"
	I1124 14:12:44.150665  168833 cri.go:89] found id: "957acc83a13c0ca073c6f33777b9bd8699a34d773dde4dfad58149f8ebc53ada"
	I1124 14:12:44.150702  168833 cri.go:89] found id: "f69c60036ed0a1883223128524650e3b4bafdba6ddee0a04f7b68d1b82f62adb"
	I1124 14:12:44.150720  168833 cri.go:89] found id: "5f04b8e7804bd26b2d04ccb2b3138341d474bf2459719c4eddb573df39dbb7c5"
	I1124 14:12:44.150741  168833 cri.go:89] found id: "dc37c94894a8d5ee7bf3e1b55fffde9e281f4c1dda3b940aca34d6c4af33d588"
	I1124 14:12:44.150776  168833 cri.go:89] found id: ""
	I1124 14:12:44.150873  168833 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 14:12:44.165880  168833 retry.go:31] will retry after 278.934832ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T14:12:44Z" level=error msg="open /run/runc: no such file or directory"
	I1124 14:12:44.445581  168833 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 14:12:44.461779  168833 pause.go:52] kubelet running: false
	I1124 14:12:44.461850  168833 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1124 14:12:44.612876  168833 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1124 14:12:44.612973  168833 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1124 14:12:44.696206  168833 cri.go:89] found id: "2e7813c68b435644db6b5b631c6a51381cbf7671e158fa0cff68b4d62ce0855c"
	I1124 14:12:44.696232  168833 cri.go:89] found id: "5fcf211b67d41ed778023e86be7ab49fa92d6c1cc001a3c1bec7b4b36589f5ab"
	I1124 14:12:44.696237  168833 cri.go:89] found id: "4026e5c2f2cf1ab6bf194371e28d50a39a1daa43fcf0ab9101aa80b4a440331f"
	I1124 14:12:44.696242  168833 cri.go:89] found id: "d5ce86abc9d8a0d28e82df1fd675752db7a03c95cf7f8bd47507d4329d359af5"
	I1124 14:12:44.696245  168833 cri.go:89] found id: "76c82c34484bb53fc1c279f6d1eb1687216631d32b8457d880b2c185a6175dbd"
	I1124 14:12:44.696249  168833 cri.go:89] found id: "7bec6bf85a9734e71b9d06193b900b58d7d8467d320c984cf1a526c52bcc77c3"
	I1124 14:12:44.696253  168833 cri.go:89] found id: "8b84bd360beaa1b3826a8723ebe0bc93f98205c8ee561e2303b13f4dc0f5a7c9"
	I1124 14:12:44.696256  168833 cri.go:89] found id: "2ea4162d56185e1c61afd3d7a663045d3c50c97f7dd7ccda92ea411c09006854"
	I1124 14:12:44.696279  168833 cri.go:89] found id: "190f787cc9b57074c59aeec3510206ebd7988052da942a584ab393775b5ab95e"
	I1124 14:12:44.696293  168833 cri.go:89] found id: "f29d9247e0cecb84740fabf059644f3deb00691f26554c62425089202ee4784a"
	I1124 14:12:44.696303  168833 cri.go:89] found id: "957acc83a13c0ca073c6f33777b9bd8699a34d773dde4dfad58149f8ebc53ada"
	I1124 14:12:44.696310  168833 cri.go:89] found id: "f69c60036ed0a1883223128524650e3b4bafdba6ddee0a04f7b68d1b82f62adb"
	I1124 14:12:44.696314  168833 cri.go:89] found id: "5f04b8e7804bd26b2d04ccb2b3138341d474bf2459719c4eddb573df39dbb7c5"
	I1124 14:12:44.696317  168833 cri.go:89] found id: "dc37c94894a8d5ee7bf3e1b55fffde9e281f4c1dda3b940aca34d6c4af33d588"
	I1124 14:12:44.696319  168833 cri.go:89] found id: ""
	I1124 14:12:44.696386  168833 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 14:12:44.708498  168833 retry.go:31] will retry after 445.170865ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T14:12:44Z" level=error msg="open /run/runc: no such file or directory"
	I1124 14:12:45.155880  168833 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 14:12:45.174221  168833 pause.go:52] kubelet running: false
	I1124 14:12:45.174385  168833 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1124 14:12:45.375439  168833 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1124 14:12:45.375551  168833 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1124 14:12:45.446898  168833 cri.go:89] found id: "2e7813c68b435644db6b5b631c6a51381cbf7671e158fa0cff68b4d62ce0855c"
	I1124 14:12:45.446973  168833 cri.go:89] found id: "5fcf211b67d41ed778023e86be7ab49fa92d6c1cc001a3c1bec7b4b36589f5ab"
	I1124 14:12:45.446993  168833 cri.go:89] found id: "4026e5c2f2cf1ab6bf194371e28d50a39a1daa43fcf0ab9101aa80b4a440331f"
	I1124 14:12:45.447021  168833 cri.go:89] found id: "d5ce86abc9d8a0d28e82df1fd675752db7a03c95cf7f8bd47507d4329d359af5"
	I1124 14:12:45.447045  168833 cri.go:89] found id: "76c82c34484bb53fc1c279f6d1eb1687216631d32b8457d880b2c185a6175dbd"
	I1124 14:12:45.447067  168833 cri.go:89] found id: "7bec6bf85a9734e71b9d06193b900b58d7d8467d320c984cf1a526c52bcc77c3"
	I1124 14:12:45.447086  168833 cri.go:89] found id: "8b84bd360beaa1b3826a8723ebe0bc93f98205c8ee561e2303b13f4dc0f5a7c9"
	I1124 14:12:45.447113  168833 cri.go:89] found id: "2ea4162d56185e1c61afd3d7a663045d3c50c97f7dd7ccda92ea411c09006854"
	I1124 14:12:45.447135  168833 cri.go:89] found id: "190f787cc9b57074c59aeec3510206ebd7988052da942a584ab393775b5ab95e"
	I1124 14:12:45.447161  168833 cri.go:89] found id: "f29d9247e0cecb84740fabf059644f3deb00691f26554c62425089202ee4784a"
	I1124 14:12:45.447181  168833 cri.go:89] found id: "957acc83a13c0ca073c6f33777b9bd8699a34d773dde4dfad58149f8ebc53ada"
	I1124 14:12:45.447202  168833 cri.go:89] found id: "f69c60036ed0a1883223128524650e3b4bafdba6ddee0a04f7b68d1b82f62adb"
	I1124 14:12:45.447222  168833 cri.go:89] found id: "5f04b8e7804bd26b2d04ccb2b3138341d474bf2459719c4eddb573df39dbb7c5"
	I1124 14:12:45.447246  168833 cri.go:89] found id: "dc37c94894a8d5ee7bf3e1b55fffde9e281f4c1dda3b940aca34d6c4af33d588"
	I1124 14:12:45.447268  168833 cri.go:89] found id: ""
	I1124 14:12:45.447338  168833 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 14:12:45.484873  168833 out.go:203] 
	W1124 14:12:45.508935  168833 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T14:12:45Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T14:12:45Z" level=error msg="open /run/runc: no such file or directory"
	
	W1124 14:12:45.508966  168833 out.go:285] * 
	* 
	W1124 14:12:45.514224  168833 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1124 14:12:45.543812  168833 out.go:203] 

                                                
                                                
** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-linux-arm64 pause -p pause-007087 --alsologtostderr -v=5" : exit status 80
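The trace above lays out minikube's pause sequence: check whether the kubelet is active, disable it, list CRI containers in the kube-system/kubernetes-dashboard/istio-operator namespaces via crictl, then shell out to `sudo runc list -f json` to find the processes to freeze, retrying with backoff when that listing fails. The same probes can be replayed by hand inside the node (a sketch mirroring the exact commands in the log, run via `minikube ssh -p pause-007087`):

	sudo systemctl is-active --quiet service kubelet; echo "kubelet active exit: $?"
	sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	sudo runc list -f json   # fails here: open /run/runc: no such file or directory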
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-007087
helpers_test.go:243: (dbg) docker inspect pause-007087:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "eade3cbbfb41d4e4cfb010c3e35b00ea58d124c3bdbd3ff7cac154bbebc1d0b5",
	        "Created": "2025-11-24T14:10:53.646189475Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 162187,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T14:10:53.718365572Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/eade3cbbfb41d4e4cfb010c3e35b00ea58d124c3bdbd3ff7cac154bbebc1d0b5/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/eade3cbbfb41d4e4cfb010c3e35b00ea58d124c3bdbd3ff7cac154bbebc1d0b5/hostname",
	        "HostsPath": "/var/lib/docker/containers/eade3cbbfb41d4e4cfb010c3e35b00ea58d124c3bdbd3ff7cac154bbebc1d0b5/hosts",
	        "LogPath": "/var/lib/docker/containers/eade3cbbfb41d4e4cfb010c3e35b00ea58d124c3bdbd3ff7cac154bbebc1d0b5/eade3cbbfb41d4e4cfb010c3e35b00ea58d124c3bdbd3ff7cac154bbebc1d0b5-json.log",
	        "Name": "/pause-007087",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-007087:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-007087",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "eade3cbbfb41d4e4cfb010c3e35b00ea58d124c3bdbd3ff7cac154bbebc1d0b5",
	                "LowerDir": "/var/lib/docker/overlay2/8c4e521782a36aada5fff59d5bec29ce59a596b4f3ea1eaaf9b9b986c7ad9ed5-init/diff:/var/lib/docker/overlay2/13a44a1c9c7389f495d930a01834ff28273a0e5eb2fe3411fc4db3ff0709690d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8c4e521782a36aada5fff59d5bec29ce59a596b4f3ea1eaaf9b9b986c7ad9ed5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8c4e521782a36aada5fff59d5bec29ce59a596b4f3ea1eaaf9b9b986c7ad9ed5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8c4e521782a36aada5fff59d5bec29ce59a596b4f3ea1eaaf9b9b986c7ad9ed5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "pause-007087",
	                "Source": "/var/lib/docker/volumes/pause-007087/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-007087",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-007087",
	                "name.minikube.sigs.k8s.io": "pause-007087",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "09d3749193968070e3270ccf8f86c2078a55f4fc7b5c7815809878e14a1d9a76",
	            "SandboxKey": "/var/run/docker/netns/09d374919396",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33023"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33024"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33027"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33025"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33026"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-007087": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "1e:26:f4:26:0c:3b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "91d34f833e5a794c1a8c34d58d3b051bc5abdc8b5bf166590ca1a2b006806749",
	                    "EndpointID": "6093e0b493baa05b823a7a1aeb73f4e2bcc516edce2d5cc92cba075b357c39e9",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-007087",
	                        "eade3cbbfb41"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
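Two details in the inspect output above are worth noting. First, HostConfig.Tmpfs mounts /run as tmpfs inside the kic container, which may be relevant to the "open /run/runc: no such file or directory" error, since anything under /run is ephemeral in this node. Second, the port map under NetworkSettings.Ports can be queried with the same Go template minikube used in the trace (quoting adjusted for a plain shell; illustrative, not part of the test):

	docker container inspect \
	  -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
	  pause-007087   # expected output: 33023, matching the inspect JSON above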
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-007087 -n pause-007087
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-007087 -n pause-007087: exit status 2 (455.141175ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p pause-007087 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p pause-007087 logs -n 25: (1.858867176s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                   ARGS                                                                   │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p missing-upgrade-593066                                                                                                                │ missing-upgrade-593066    │ jenkins │ v1.37.0 │ 24 Nov 25 14:06 UTC │ 24 Nov 25 14:06 UTC │
	│ start   │ -p kubernetes-upgrade-610110 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-610110 │ jenkins │ v1.37.0 │ 24 Nov 25 14:06 UTC │ 24 Nov 25 14:07 UTC │
	│ stop    │ -p kubernetes-upgrade-610110                                                                                                             │ kubernetes-upgrade-610110 │ jenkins │ v1.37.0 │ 24 Nov 25 14:07 UTC │ 24 Nov 25 14:07 UTC │
	│ start   │ -p kubernetes-upgrade-610110 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-610110 │ jenkins │ v1.37.0 │ 24 Nov 25 14:07 UTC │ 24 Nov 25 14:12 UTC │
	│ delete  │ -p NoKubernetes-637834                                                                                                                   │ NoKubernetes-637834       │ jenkins │ v1.37.0 │ 24 Nov 25 14:07 UTC │ 24 Nov 25 14:07 UTC │
	│ start   │ -p NoKubernetes-637834 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                    │ NoKubernetes-637834       │ jenkins │ v1.37.0 │ 24 Nov 25 14:07 UTC │ 24 Nov 25 14:07 UTC │
	│ ssh     │ -p NoKubernetes-637834 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-637834       │ jenkins │ v1.37.0 │ 24 Nov 25 14:07 UTC │                     │
	│ stop    │ -p NoKubernetes-637834                                                                                                                   │ NoKubernetes-637834       │ jenkins │ v1.37.0 │ 24 Nov 25 14:08 UTC │ 24 Nov 25 14:08 UTC │
	│ start   │ -p NoKubernetes-637834 --driver=docker  --container-runtime=crio                                                                         │ NoKubernetes-637834       │ jenkins │ v1.37.0 │ 24 Nov 25 14:08 UTC │ 24 Nov 25 14:08 UTC │
	│ ssh     │ -p NoKubernetes-637834 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-637834       │ jenkins │ v1.37.0 │ 24 Nov 25 14:08 UTC │                     │
	│ delete  │ -p NoKubernetes-637834                                                                                                                   │ NoKubernetes-637834       │ jenkins │ v1.37.0 │ 24 Nov 25 14:08 UTC │ 24 Nov 25 14:08 UTC │
	│ start   │ -p stopped-upgrade-189175 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ stopped-upgrade-189175    │ jenkins │ v1.32.0 │ 24 Nov 25 14:08 UTC │ 24 Nov 25 14:09 UTC │
	│ stop    │ stopped-upgrade-189175 stop                                                                                                              │ stopped-upgrade-189175    │ jenkins │ v1.32.0 │ 24 Nov 25 14:09 UTC │ 24 Nov 25 14:09 UTC │
	│ start   │ -p stopped-upgrade-189175 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ stopped-upgrade-189175    │ jenkins │ v1.37.0 │ 24 Nov 25 14:09 UTC │ 24 Nov 25 14:09 UTC │
	│ delete  │ -p stopped-upgrade-189175                                                                                                                │ stopped-upgrade-189175    │ jenkins │ v1.37.0 │ 24 Nov 25 14:09 UTC │ 24 Nov 25 14:09 UTC │
	│ start   │ -p running-upgrade-668851 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ running-upgrade-668851    │ jenkins │ v1.32.0 │ 24 Nov 25 14:09 UTC │ 24 Nov 25 14:10 UTC │
	│ start   │ -p running-upgrade-668851 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ running-upgrade-668851    │ jenkins │ v1.37.0 │ 24 Nov 25 14:10 UTC │ 24 Nov 25 14:10 UTC │
	│ delete  │ -p running-upgrade-668851                                                                                                                │ running-upgrade-668851    │ jenkins │ v1.37.0 │ 24 Nov 25 14:10 UTC │ 24 Nov 25 14:10 UTC │
	│ start   │ -p pause-007087 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                │ pause-007087              │ jenkins │ v1.37.0 │ 24 Nov 25 14:10 UTC │ 24 Nov 25 14:12 UTC │
	│ start   │ -p kubernetes-upgrade-610110 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                        │ kubernetes-upgrade-610110 │ jenkins │ v1.37.0 │ 24 Nov 25 14:12 UTC │                     │
	│ start   │ -p kubernetes-upgrade-610110 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-610110 │ jenkins │ v1.37.0 │ 24 Nov 25 14:12 UTC │ 24 Nov 25 14:12 UTC │
	│ start   │ -p pause-007087 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                         │ pause-007087              │ jenkins │ v1.37.0 │ 24 Nov 25 14:12 UTC │ 24 Nov 25 14:12 UTC │
	│ delete  │ -p kubernetes-upgrade-610110                                                                                                             │ kubernetes-upgrade-610110 │ jenkins │ v1.37.0 │ 24 Nov 25 14:12 UTC │ 24 Nov 25 14:12 UTC │
	│ start   │ -p force-systemd-flag-928059 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio              │ force-systemd-flag-928059 │ jenkins │ v1.37.0 │ 24 Nov 25 14:12 UTC │                     │
	│ pause   │ -p pause-007087 --alsologtostderr -v=5                                                                                                   │ pause-007087              │ jenkins │ v1.37.0 │ 24 Nov 25 14:12 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 14:12:39
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 14:12:39.696393  168533 out.go:360] Setting OutFile to fd 1 ...
	I1124 14:12:39.696601  168533 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 14:12:39.696628  168533 out.go:374] Setting ErrFile to fd 2...
	I1124 14:12:39.696655  168533 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 14:12:39.697071  168533 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-2805/.minikube/bin
	I1124 14:12:39.697631  168533 out.go:368] Setting JSON to false
	I1124 14:12:39.698717  168533 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":6911,"bootTime":1763986649,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1124 14:12:39.698857  168533 start.go:143] virtualization:  
	I1124 14:12:39.702496  168533 out.go:179] * [force-systemd-flag-928059] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1124 14:12:39.705036  168533 out.go:179]   - MINIKUBE_LOCATION=21932
	I1124 14:12:39.705181  168533 notify.go:221] Checking for updates...
	I1124 14:12:39.711442  168533 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 14:12:39.714678  168533 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21932-2805/kubeconfig
	I1124 14:12:39.717720  168533 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21932-2805/.minikube
	I1124 14:12:39.720740  168533 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1124 14:12:39.723770  168533 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 14:12:39.727410  168533 config.go:182] Loaded profile config "pause-007087": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 14:12:39.727519  168533 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 14:12:39.756267  168533 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1124 14:12:39.756400  168533 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 14:12:39.823132  168533 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-24 14:12:39.813068773 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 14:12:39.823238  168533 docker.go:319] overlay module found
	I1124 14:12:39.826559  168533 out.go:179] * Using the docker driver based on user configuration
	I1124 14:12:39.829510  168533 start.go:309] selected driver: docker
	I1124 14:12:39.829528  168533 start.go:927] validating driver "docker" against <nil>
	I1124 14:12:39.829542  168533 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 14:12:39.830287  168533 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 14:12:39.903520  168533 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-24 14:12:39.893214671 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 14:12:39.903680  168533 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1124 14:12:39.903918  168533 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1124 14:12:39.906976  168533 out.go:179] * Using Docker driver with root privileges
	I1124 14:12:39.909861  168533 cni.go:84] Creating CNI manager for ""
	I1124 14:12:39.909934  168533 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 14:12:39.909948  168533 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1124 14:12:39.910105  168533 start.go:353] cluster config:
	{Name:force-systemd-flag-928059 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-flag-928059 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 14:12:39.913253  168533 out.go:179] * Starting "force-systemd-flag-928059" primary control-plane node in "force-systemd-flag-928059" cluster
	I1124 14:12:39.916166  168533 cache.go:134] Beginning downloading kic base image for docker with crio
	I1124 14:12:39.919194  168533 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1124 14:12:39.922214  168533 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 14:12:39.922265  168533 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21932-2805/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1124 14:12:39.922274  168533 cache.go:65] Caching tarball of preloaded images
	I1124 14:12:39.922359  168533 preload.go:238] Found /home/jenkins/minikube-integration/21932-2805/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1124 14:12:39.922369  168533 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1124 14:12:39.922494  168533 profile.go:143] Saving config to /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/force-systemd-flag-928059/config.json ...
	I1124 14:12:39.922513  168533 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/force-systemd-flag-928059/config.json: {Name:mkc6e555cb8558e370f3b1cb5d7b1c239d0b3402 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
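The lock acquisition above retries with Delay:500ms up to Timeout:1m0s before giving up. A minimal Go sketch of that retry-until-deadline pattern, using a plain lock file with an atomic O_EXCL create; this is an illustration only, not minikube's actual lock.WriteFile implementation:

package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

// acquireLock retries an exclusive create every `delay` until `timeout`,
// mirroring the Delay:500ms / Timeout:1m0s settings logged above. The
// O_EXCL flag is what makes acquisition atomic: only one process can
// create the file.
func acquireLock(path string, delay, timeout time.Duration) (release func(), err error) {
	deadline := time.Now().Add(timeout)
	for {
		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			f.Close()
			return func() { os.Remove(path) }, nil
		}
		if !errors.Is(err, os.ErrExist) {
			return nil, err // unexpected filesystem error
		}
		if time.Now().After(deadline) {
			return nil, fmt.Errorf("timed out acquiring %s", path)
		}
		time.Sleep(delay) // lock held elsewhere: wait and retry
	}
}

func main() {
	release, err := acquireLock("/tmp/demo.lock", 500*time.Millisecond, time.Minute)
	if err != nil {
		fmt.Println(err)
		return
	}
	defer release()
	fmt.Println("lock acquired")
}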
	I1124 14:12:39.922670  168533 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1124 14:12:39.943196  168533 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1124 14:12:39.943217  168533 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1124 14:12:39.943233  168533 cache.go:240] Successfully downloaded all kic artifacts
	I1124 14:12:39.943263  168533 start.go:360] acquireMachinesLock for force-systemd-flag-928059: {Name:mk69c9a6f0f0c69847d09d52b02915ec79b546e3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 14:12:39.943417  168533 start.go:364] duration metric: took 138.094µs to acquireMachinesLock for "force-systemd-flag-928059"
	I1124 14:12:39.943451  168533 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-928059 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-flag-928059 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 14:12:39.943525  168533 start.go:125] createHost starting for "" (driver="docker")
	I1124 14:12:38.084748  166961 pod_ready.go:94] pod "etcd-pause-007087" is "Ready"
	I1124 14:12:38.084788  166961 pod_ready.go:86] duration metric: took 2.008454436s for pod "etcd-pause-007087" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:12:38.088097  166961 pod_ready.go:83] waiting for pod "kube-apiserver-pause-007087" in "kube-system" namespace to be "Ready" or be gone ...
	W1124 14:12:40.095777  166961 pod_ready.go:104] pod "kube-apiserver-pause-007087" is not "Ready", error: <nil>
	W1124 14:12:42.113924  166961 pod_ready.go:104] pod "kube-apiserver-pause-007087" is not "Ready", error: <nil>
	I1124 14:12:43.099269  166961 pod_ready.go:94] pod "kube-apiserver-pause-007087" is "Ready"
	I1124 14:12:43.099291  166961 pod_ready.go:86] duration metric: took 5.011163612s for pod "kube-apiserver-pause-007087" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:12:43.102691  166961 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-007087" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:12:43.108688  166961 pod_ready.go:94] pod "kube-controller-manager-pause-007087" is "Ready"
	I1124 14:12:43.108715  166961 pod_ready.go:86] duration metric: took 5.997701ms for pod "kube-controller-manager-pause-007087" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:12:43.111744  166961 pod_ready.go:83] waiting for pod "kube-proxy-rdjsw" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:12:43.117305  166961 pod_ready.go:94] pod "kube-proxy-rdjsw" is "Ready"
	I1124 14:12:43.117334  166961 pod_ready.go:86] duration metric: took 5.563809ms for pod "kube-proxy-rdjsw" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:12:43.119780  166961 pod_ready.go:83] waiting for pod "kube-scheduler-pause-007087" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:12:43.292381  166961 pod_ready.go:94] pod "kube-scheduler-pause-007087" is "Ready"
	I1124 14:12:43.292408  166961 pod_ready.go:86] duration metric: took 172.60527ms for pod "kube-scheduler-pause-007087" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:12:43.292420  166961 pod_ready.go:40] duration metric: took 10.230465001s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 14:12:43.369827  166961 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1124 14:12:43.378252  166961 out.go:179] * Done! kubectl is now configured to use "pause-007087" cluster and "default" namespace by default
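The pod_ready loop above polls each control-plane pod until it reports Ready or disappears. A minimal client-go sketch of that check, assuming an already-built clientset (construction elided); the namespace and pod names would be the ones in the log:

package podwait

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitPodReadyOrGone polls until the named pod has condition Ready=True
// or no longer exists, whichever comes first. Transient API errors are
// swallowed and retried, matching the "is not Ready, error: <nil>"
// retries in the pod_ready log above.
func waitPodReadyOrGone(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	return wait.PollUntilContextTimeout(ctx, 2*time.Second, 2*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if apierrors.IsNotFound(err) {
				return true, nil // pod is gone, which also ends the wait
			}
			if err != nil {
				return false, nil // transient error: keep polling
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return true, nil
				}
			}
			return false, nil
		})
}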
	I1124 14:12:39.946988  168533 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1124 14:12:39.947220  168533 start.go:159] libmachine.API.Create for "force-systemd-flag-928059" (driver="docker")
	I1124 14:12:39.947254  168533 client.go:173] LocalClient.Create starting
	I1124 14:12:39.947393  168533 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca.pem
	I1124 14:12:39.947433  168533 main.go:143] libmachine: Decoding PEM data...
	I1124 14:12:39.947459  168533 main.go:143] libmachine: Parsing certificate...
	I1124 14:12:39.947518  168533 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21932-2805/.minikube/certs/cert.pem
	I1124 14:12:39.947544  168533 main.go:143] libmachine: Decoding PEM data...
	I1124 14:12:39.947560  168533 main.go:143] libmachine: Parsing certificate...
	I1124 14:12:39.947940  168533 cli_runner.go:164] Run: docker network inspect force-systemd-flag-928059 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1124 14:12:39.963776  168533 cli_runner.go:211] docker network inspect force-systemd-flag-928059 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1124 14:12:39.963866  168533 network_create.go:284] running [docker network inspect force-systemd-flag-928059] to gather additional debugging logs...
	I1124 14:12:39.963883  168533 cli_runner.go:164] Run: docker network inspect force-systemd-flag-928059
	W1124 14:12:39.980363  168533 cli_runner.go:211] docker network inspect force-systemd-flag-928059 returned with exit code 1
	I1124 14:12:39.980396  168533 network_create.go:287] error running [docker network inspect force-systemd-flag-928059]: docker network inspect force-systemd-flag-928059: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-flag-928059 not found
	I1124 14:12:39.980411  168533 network_create.go:289] output of [docker network inspect force-systemd-flag-928059]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-flag-928059 not found
	
	** /stderr **
	I1124 14:12:39.980517  168533 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 14:12:39.996436  168533 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-b3087ee9f269 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:1a:07:60:94:e6:54} reservation:<nil>}
	I1124 14:12:39.996739  168533 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-87dca5a19352 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:e6:6c:c1:85:45:94} reservation:<nil>}
	I1124 14:12:39.997036  168533 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-9e995bd1b79e IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:82:f1:73:f5:6f:cf} reservation:<nil>}
	I1124 14:12:39.997355  168533 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-91d34f833e5a IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:82:34:a1:7d:09:f3} reservation:<nil>}
	I1124 14:12:39.997759  168533 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a10f50}
	I1124 14:12:39.997786  168533 network_create.go:124] attempt to create docker network force-systemd-flag-928059 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1124 14:12:39.997853  168533 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-928059 force-systemd-flag-928059
	I1124 14:12:40.080039  168533 network_create.go:108] docker network force-systemd-flag-928059 192.168.85.0/24 created
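The four "skipping subnet" probes above advance the third octet by 9 (49, 58, 67, 76) until a free /24 is found. A toy Go reproduction of that selection, with the taken set hard-coded from this log rather than read from the host's bridge interfaces as the real code does:

package main

import "fmt"

// firstFreeSubnet walks candidate /24 networks starting at
// 192.168.49.0/24, stepping the third octet by 9 as observed in the log,
// and returns the first one not already taken.
func firstFreeSubnet(taken map[string]bool) string {
	for octet := 49; octet <= 255; octet += 9 {
		cidr := fmt.Sprintf("192.168.%d.0/24", octet)
		if !taken[cidr] {
			return cidr
		}
	}
	return ""
}

func main() {
	taken := map[string]bool{
		"192.168.49.0/24": true, "192.168.58.0/24": true,
		"192.168.67.0/24": true, "192.168.76.0/24": true,
	}
	fmt.Println(firstFreeSubnet(taken)) // prints 192.168.85.0/24, as in the log
}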
	I1124 14:12:40.080092  168533 kic.go:121] calculated static IP "192.168.85.2" for the "force-systemd-flag-928059" container
	I1124 14:12:40.080168  168533 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1124 14:12:40.100659  168533 cli_runner.go:164] Run: docker volume create force-systemd-flag-928059 --label name.minikube.sigs.k8s.io=force-systemd-flag-928059 --label created_by.minikube.sigs.k8s.io=true
	I1124 14:12:40.118843  168533 oci.go:103] Successfully created a docker volume force-systemd-flag-928059
	I1124 14:12:40.118949  168533 cli_runner.go:164] Run: docker run --rm --name force-systemd-flag-928059-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-928059 --entrypoint /usr/bin/test -v force-systemd-flag-928059:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1124 14:12:40.679052  168533 oci.go:107] Successfully prepared a docker volume force-systemd-flag-928059
	I1124 14:12:40.679115  168533 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 14:12:40.679128  168533 kic.go:194] Starting extracting preloaded images to volume ...
	I1124 14:12:40.679200  168533 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21932-2805/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-928059:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir
	
	
	==> CRI-O <==
	Nov 24 14:12:23 pause-007087 crio[2076]: time="2025-11-24T14:12:23.703238227Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 14:12:23 pause-007087 crio[2076]: time="2025-11-24T14:12:23.724460998Z" level=info msg="Created container 4026e5c2f2cf1ab6bf194371e28d50a39a1daa43fcf0ab9101aa80b4a440331f: kube-system/etcd-pause-007087/etcd" id=8a90caca-d071-4cf6-939e-e0a420ffc7de name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 14:12:23 pause-007087 crio[2076]: time="2025-11-24T14:12:23.726145294Z" level=info msg="Starting container: 4026e5c2f2cf1ab6bf194371e28d50a39a1daa43fcf0ab9101aa80b4a440331f" id=8b89d9dc-52a8-4859-8d8b-f92e32156504 name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 14:12:23 pause-007087 crio[2076]: time="2025-11-24T14:12:23.728887185Z" level=info msg="Started container" PID=2379 containerID=4026e5c2f2cf1ab6bf194371e28d50a39a1daa43fcf0ab9101aa80b4a440331f description=kube-system/etcd-pause-007087/etcd id=8b89d9dc-52a8-4859-8d8b-f92e32156504 name=/runtime.v1.RuntimeService/StartContainer sandboxID=1f9300f5e92ae9b66f5f6a65e0c5589e76965543b644b51d88efd1b539526c65
	Nov 24 14:12:23 pause-007087 crio[2076]: time="2025-11-24T14:12:23.758958269Z" level=info msg="Created container 2e7813c68b435644db6b5b631c6a51381cbf7671e158fa0cff68b4d62ce0855c: kube-system/kindnet-z2thf/kindnet-cni" id=72811640-f540-4395-bcf4-e61eef79ea06 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 14:12:23 pause-007087 crio[2076]: time="2025-11-24T14:12:23.759698602Z" level=info msg="Starting container: 2e7813c68b435644db6b5b631c6a51381cbf7671e158fa0cff68b4d62ce0855c" id=da55c74b-0e32-4d1b-bd70-173e57eea19d name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 14:12:23 pause-007087 crio[2076]: time="2025-11-24T14:12:23.761808314Z" level=info msg="Started container" PID=2403 containerID=2e7813c68b435644db6b5b631c6a51381cbf7671e158fa0cff68b4d62ce0855c description=kube-system/kindnet-z2thf/kindnet-cni id=da55c74b-0e32-4d1b-bd70-173e57eea19d name=/runtime.v1.RuntimeService/StartContainer sandboxID=409b4bcad92ff1e3663df36d7de57f1d53049687e5e5472c408c37c8ded199dc
	Nov 24 14:12:24 pause-007087 crio[2076]: time="2025-11-24T14:12:24.081335033Z" level=info msg="Created container 5fcf211b67d41ed778023e86be7ab49fa92d6c1cc001a3c1bec7b4b36589f5ab: kube-system/kube-proxy-rdjsw/kube-proxy" id=52bc38d9-885b-4fc1-b075-038eeef04981 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 14:12:24 pause-007087 crio[2076]: time="2025-11-24T14:12:24.081993256Z" level=info msg="Starting container: 5fcf211b67d41ed778023e86be7ab49fa92d6c1cc001a3c1bec7b4b36589f5ab" id=6d0cfb88-ae6f-4faa-b638-96ca1a06cf55 name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 14:12:24 pause-007087 crio[2076]: time="2025-11-24T14:12:24.084844039Z" level=info msg="Started container" PID=2404 containerID=5fcf211b67d41ed778023e86be7ab49fa92d6c1cc001a3c1bec7b4b36589f5ab description=kube-system/kube-proxy-rdjsw/kube-proxy id=6d0cfb88-ae6f-4faa-b638-96ca1a06cf55 name=/runtime.v1.RuntimeService/StartContainer sandboxID=7d00ea52de5b3fa8fdda4d92a4ed7e48d9b3f7b8d2df2ded3840cbe64140c8d8
	Nov 24 14:12:34 pause-007087 crio[2076]: time="2025-11-24T14:12:34.141762839Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 24 14:12:34 pause-007087 crio[2076]: time="2025-11-24T14:12:34.145351994Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 24 14:12:34 pause-007087 crio[2076]: time="2025-11-24T14:12:34.145394924Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 24 14:12:34 pause-007087 crio[2076]: time="2025-11-24T14:12:34.145419465Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 24 14:12:34 pause-007087 crio[2076]: time="2025-11-24T14:12:34.149957957Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 24 14:12:34 pause-007087 crio[2076]: time="2025-11-24T14:12:34.149999139Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 24 14:12:34 pause-007087 crio[2076]: time="2025-11-24T14:12:34.150025281Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 24 14:12:34 pause-007087 crio[2076]: time="2025-11-24T14:12:34.154218554Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 24 14:12:34 pause-007087 crio[2076]: time="2025-11-24T14:12:34.154260072Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 24 14:12:34 pause-007087 crio[2076]: time="2025-11-24T14:12:34.154283941Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 24 14:12:34 pause-007087 crio[2076]: time="2025-11-24T14:12:34.157714465Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 24 14:12:34 pause-007087 crio[2076]: time="2025-11-24T14:12:34.157750962Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 24 14:12:34 pause-007087 crio[2076]: time="2025-11-24T14:12:34.15777657Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 24 14:12:34 pause-007087 crio[2076]: time="2025-11-24T14:12:34.161228214Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 24 14:12:34 pause-007087 crio[2076]: time="2025-11-24T14:12:34.161269051Z" level=info msg="Updated default CNI network name to kindnet"
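CRI-O's CNI monitor above reacts to CREATE/WRITE/RENAME events on /etc/cni/net.d and re-resolves the default network after each one. A minimal sketch of the same inotify-style directory watch using the fsnotify package; this illustrates the pattern, it is not CRI-O's code:

package main

import (
	"log"
	"strings"

	"github.com/fsnotify/fsnotify"
)

func main() {
	w, err := fsnotify.NewWatcher()
	if err != nil {
		log.Fatal(err)
	}
	defer w.Close()

	// Watch the CNI config directory, as CRI-O's monitor does above.
	if err := w.Add("/etc/cni/net.d"); err != nil {
		log.Fatal(err)
	}

	for {
		select {
		case ev, ok := <-w.Events:
			if !ok {
				return
			}
			// A CREATE/WRITE/RENAME touching a conflist is the cue to
			// re-read the directory and re-pick the default CNI network.
			if strings.Contains(ev.Name, ".conflist") {
				log.Printf("CNI monitoring event %s %q", ev.Op, ev.Name)
			}
		case err, ok := <-w.Errors:
			if !ok {
				return
			}
			log.Println("watch error:", err)
		}
	}
}

Writing the new config as 10-kindnet.conflist.temp and renaming it into place, as kindnet does above, keeps readers from ever seeing a half-written file.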
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	2e7813c68b435       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   23 seconds ago       Running             kindnet-cni               1                   409b4bcad92ff       kindnet-z2thf                          kube-system
	5fcf211b67d41       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   23 seconds ago       Running             kube-proxy                1                   7d00ea52de5b3       kube-proxy-rdjsw                       kube-system
	4026e5c2f2cf1       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   23 seconds ago       Running             etcd                      1                   1f9300f5e92ae       etcd-pause-007087                      kube-system
	d5ce86abc9d8a       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   23 seconds ago       Running             kube-apiserver            1                   a0d6446f5f1bc       kube-apiserver-pause-007087            kube-system
	76c82c34484bb       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   23 seconds ago       Running             kube-controller-manager   1                   4f286f0c61d9f       kube-controller-manager-pause-007087   kube-system
	7bec6bf85a973       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   23 seconds ago       Running             kube-scheduler            1                   1ad71fa8bd5e2       kube-scheduler-pause-007087            kube-system
	8b84bd360beaa       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   23 seconds ago       Running             coredns                   1                   fdb6dd3f1d754       coredns-66bc5c9577-gwt47               kube-system
	2ea4162d56185       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   38 seconds ago       Exited              coredns                   0                   fdb6dd3f1d754       coredns-66bc5c9577-gwt47               kube-system
	190f787cc9b57       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   About a minute ago   Exited              kube-proxy                0                   7d00ea52de5b3       kube-proxy-rdjsw                       kube-system
	f29d9247e0cec       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   About a minute ago   Exited              kindnet-cni               0                   409b4bcad92ff       kindnet-z2thf                          kube-system
	957acc83a13c0       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   About a minute ago   Exited              kube-scheduler            0                   1ad71fa8bd5e2       kube-scheduler-pause-007087            kube-system
	f69c60036ed0a       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   About a minute ago   Exited              kube-controller-manager   0                   4f286f0c61d9f       kube-controller-manager-pause-007087   kube-system
	5f04b8e7804bd       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   About a minute ago   Exited              etcd                      0                   1f9300f5e92ae       etcd-pause-007087                      kube-system
	dc37c94894a8d       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   About a minute ago   Exited              kube-apiserver            0                   a0d6446f5f1bc       kube-apiserver-pause-007087            kube-system
	
	
	==> coredns [2ea4162d56185e1c61afd3d7a663045d3c50c97f7dd7ccda92ea411c09006854] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:34357 - 42642 "HINFO IN 5064016793608895211.3002910998539418074. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.037285172s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [8b84bd360beaa1b3826a8723ebe0bc93f98205c8ee561e2303b13f4dc0f5a7c9] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:52718 - 39233 "HINFO IN 375354111847071670.246084428137399410. udp 55 false 512" NXDOMAIN qr,rd,ra 55 0.043508225s
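The restarted coredns above cannot list Services or EndpointSlices while the apiserver is still coming back ("connection refused"), logs "waiting for Kubernetes API before starting server", and eventually starts anyway "with unsynced Kubernetes API". A minimal sketch of that wait-then-give-up pattern; the 10.96.0.1 service IP comes from the log, while the /readyz probe, the 90s cutoff, and the skipped TLS verification (no CA bundle in this sketch) are assumptions:

package main

import (
	"crypto/tls"
	"log"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(90 * time.Second)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://10.96.0.1:443/readyz")
		if err == nil {
			resp.Body.Close() // any HTTP answer means the apiserver is up
			log.Println("API reachable, starting server")
			return
		}
		log.Println("waiting for Kubernetes API before starting server:", err)
		time.Sleep(2 * time.Second)
	}
	// Past the deadline: serve anyway, as coredns's warning above shows.
	log.Println("starting server with unsynced Kubernetes API")
}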
	
	
	==> describe nodes <==
	Name:               pause-007087
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-007087
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b5d1c9f4e75f4e638a533695fd62619949cefcab
	                    minikube.k8s.io/name=pause-007087
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T14_11_22_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 14:11:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-007087
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 14:12:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 14:12:08 +0000   Mon, 24 Nov 2025 14:11:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 14:12:08 +0000   Mon, 24 Nov 2025 14:11:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 14:12:08 +0000   Mon, 24 Nov 2025 14:11:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Nov 2025 14:12:08 +0000   Mon, 24 Nov 2025 14:12:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    pause-007087
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 7283ea1857f18f20a875c29069214c9d
	  System UUID:                719d6cce-d8b3-486e-b679-c2d3627359a4
	  Boot ID:                    1b5f797b-5607-4a65-8de2-379783b7e272
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-gwt47                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     80s
	  kube-system                 etcd-pause-007087                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         86s
	  kube-system                 kindnet-z2thf                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      80s
	  kube-system                 kube-apiserver-pause-007087             250m (12%)    0 (0%)      0 (0%)           0 (0%)         86s
	  kube-system                 kube-controller-manager-pause-007087    200m (10%)    0 (0%)      0 (0%)           0 (0%)         86s
	  kube-system                 kube-proxy-rdjsw                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         80s
	  kube-system                 kube-scheduler-pause-007087             100m (5%)     0 (0%)      0 (0%)           0 (0%)         86s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 78s   kube-proxy       
	  Normal   Starting                 14s   kube-proxy       
	  Normal   Starting                 86s   kubelet          Starting kubelet.
	  Warning  CgroupV1                 86s   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  86s   kubelet          Node pause-007087 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    86s   kubelet          Node pause-007087 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     86s   kubelet          Node pause-007087 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           81s   node-controller  Node pause-007087 event: Registered Node pause-007087 in Controller
	  Normal   NodeReady                39s   kubelet          Node pause-007087 status is now: NodeReady
	  Warning  ContainerGCFailed        26s   kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           13s   node-controller  Node pause-007087 event: Registered Node pause-007087 in Controller
	
	
	==> dmesg <==
	[Nov24 13:42] overlayfs: idmapped layers are currently not supported
	[Nov24 13:43] overlayfs: idmapped layers are currently not supported
	[  +3.897493] overlayfs: idmapped layers are currently not supported
	[Nov24 13:44] overlayfs: idmapped layers are currently not supported
	[  +3.266311] hrtimer: interrupt took 21855280 ns
	[Nov24 13:45] overlayfs: idmapped layers are currently not supported
	[Nov24 13:46] overlayfs: idmapped layers are currently not supported
	[Nov24 13:52] overlayfs: idmapped layers are currently not supported
	[ +31.432146] overlayfs: idmapped layers are currently not supported
	[Nov24 13:53] overlayfs: idmapped layers are currently not supported
	[Nov24 13:54] overlayfs: idmapped layers are currently not supported
	[Nov24 13:56] overlayfs: idmapped layers are currently not supported
	[Nov24 13:57] overlayfs: idmapped layers are currently not supported
	[Nov24 13:58] overlayfs: idmapped layers are currently not supported
	[  +2.963383] overlayfs: idmapped layers are currently not supported
	[ +47.364934] overlayfs: idmapped layers are currently not supported
	[Nov24 13:59] overlayfs: idmapped layers are currently not supported
	[Nov24 14:00] overlayfs: idmapped layers are currently not supported
	[ +26.972375] overlayfs: idmapped layers are currently not supported
	[Nov24 14:02] overlayfs: idmapped layers are currently not supported
	[Nov24 14:03] overlayfs: idmapped layers are currently not supported
	[Nov24 14:05] overlayfs: idmapped layers are currently not supported
	[Nov24 14:07] overlayfs: idmapped layers are currently not supported
	[ +22.741489] overlayfs: idmapped layers are currently not supported
	[Nov24 14:11] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [4026e5c2f2cf1ab6bf194371e28d50a39a1daa43fcf0ab9101aa80b4a440331f] <==
	{"level":"warn","ts":"2025-11-24T14:12:27.971404Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43892","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:12:28.057257Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43904","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:12:28.110030Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43920","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:12:28.191639Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43950","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:12:28.247445Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43984","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:12:28.267048Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44004","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:12:28.291896Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44018","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:12:28.368170Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44030","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:12:28.409679Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44050","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:12:28.455745Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44062","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:12:28.512770Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44076","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:12:28.533487Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44084","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:12:28.577877Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44098","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:12:28.612490Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44120","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:12:28.658484Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44134","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:12:28.732462Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44154","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:12:28.879615Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44182","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:12:28.880784Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44172","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:12:28.915704Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44202","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:12:28.967641Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44222","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:12:29.041584Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44228","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:12:29.080924Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44242","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:12:29.110620Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44264","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:12:29.163418Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44294","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:12:29.349927Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44316","server-name":"","error":"EOF"}
	
	
	==> etcd [5f04b8e7804bd26b2d04ccb2b3138341d474bf2459719c4eddb573df39dbb7c5] <==
	{"level":"warn","ts":"2025-11-24T14:11:17.141223Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48828","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:11:17.156491Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48848","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:11:17.177701Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48852","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:11:17.219859Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48876","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:11:17.233249Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48902","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:11:17.258547Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48916","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:11:17.380454Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48936","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-24T14:12:13.096194Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-11-24T14:12:13.096256Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-007087","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	{"level":"error","ts":"2025-11-24T14:12:13.096351Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-24T14:12:13.096609Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-24T14:12:13.239491Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-11-24T14:12:13.239555Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-24T14:12:13.239639Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-24T14:12:13.239671Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-24T14:12:13.239680Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.76.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-11-24T14:12:13.239650Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-24T14:12:13.239702Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-24T14:12:13.239727Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"ea7e25599daad906","current-leader-member-id":"ea7e25599daad906"}
	{"level":"info","ts":"2025-11-24T14:12:13.239772Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-11-24T14:12:13.239785Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-11-24T14:12:13.243114Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"error","ts":"2025-11-24T14:12:13.243203Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.76.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-24T14:12:13.243235Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-11-24T14:12:13.243244Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-007087","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	
	
	==> kernel <==
	 14:12:47 up  1:55,  0 user,  load average: 3.07, 2.41, 2.12
	Linux pause-007087 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [2e7813c68b435644db6b5b631c6a51381cbf7671e158fa0cff68b4d62ce0855c] <==
	I1124 14:12:23.916455       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1124 14:12:23.921184       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1124 14:12:23.923553       1 main.go:148] setting mtu 1500 for CNI 
	I1124 14:12:23.923605       1 main.go:178] kindnetd IP family: "ipv4"
	I1124 14:12:23.923645       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-24T14:12:24Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1124 14:12:24.140921       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1124 14:12:24.140995       1 controller.go:381] "Waiting for informer caches to sync"
	I1124 14:12:24.141029       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1124 14:12:24.141178       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1124 14:12:31.147871       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:serviceaccount:kube-system:kindnet\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1124 14:12:31.243464       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1124 14:12:31.243558       1 metrics.go:72] Registering metrics
	I1124 14:12:31.243643       1 controller.go:711] "Syncing nftables rules"
	I1124 14:12:34.141418       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1124 14:12:34.141477       1 main.go:301] handling current node
	I1124 14:12:44.140923       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1124 14:12:44.141004       1 main.go:301] handling current node
	
	
	==> kindnet [f29d9247e0cecb84740fabf059644f3deb00691f26554c62425089202ee4784a] <==
	I1124 14:11:27.716221       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1124 14:11:27.716900       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1124 14:11:27.722445       1 main.go:148] setting mtu 1500 for CNI 
	I1124 14:11:27.722477       1 main.go:178] kindnetd IP family: "ipv4"
	I1124 14:11:27.722499       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-24T14:11:27Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1124 14:11:27.933823       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1124 14:11:27.933855       1 controller.go:381] "Waiting for informer caches to sync"
	I1124 14:11:27.933865       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1124 14:11:27.934654       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1124 14:11:57.934482       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1124 14:11:57.934596       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1124 14:11:57.934723       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1124 14:11:57.934785       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1124 14:11:59.535343       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1124 14:11:59.535596       1 metrics.go:72] Registering metrics
	I1124 14:11:59.535668       1 controller.go:711] "Syncing nftables rules"
	I1124 14:12:07.937062       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1124 14:12:07.937116       1 main.go:301] handling current node
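Both kindnet instances above follow the standard informer lifecycle: start reflectors, log "Waiting for informer caches to sync", retry failed LISTs (the i/o timeouts and the transient RBAC error) until the apiserver answers, then log "Caches are synced". A minimal client-go sketch of that lifecycle, assuming it runs inside a pod with in-cluster credentials:

package main

import (
	"log"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/cache"
)

func main() {
	cfg, err := rest.InClusterConfig() // assumes in-cluster service account
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	factory := informers.NewSharedInformerFactory(cs, 30*time.Second)
	nodes := factory.Core().V1().Nodes().Informer()

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)

	// The reflector behind the informer retries failed LISTs with backoff,
	// so this blocks until the apiserver is reachable and authorized.
	log.Println("Waiting for informer caches to sync")
	if !cache.WaitForCacheSync(stop, nodes.HasSynced) {
		log.Fatal("cache sync failed")
	}
	log.Println("Caches are synced")
}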
	
	
	==> kube-apiserver [d5ce86abc9d8a0d28e82df1fd675752db7a03c95cf7f8bd47507d4329d359af5] <==
	I1124 14:12:30.982397       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1124 14:12:31.063273       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1124 14:12:31.063428       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1124 14:12:31.105695       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1124 14:12:31.105772       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1124 14:12:31.162969       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1124 14:12:31.163069       1 policy_source.go:240] refreshing policies
	I1124 14:12:31.167089       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1124 14:12:31.170991       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 14:12:31.176746       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1124 14:12:31.176765       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1124 14:12:31.178933       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1124 14:12:31.189704       1 cache.go:39] Caches are synced for autoregister controller
	I1124 14:12:31.189859       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1124 14:12:31.190077       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1124 14:12:31.190258       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1124 14:12:31.206957       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	E1124 14:12:31.210559       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1124 14:12:31.234155       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1124 14:12:31.889462       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1124 14:12:33.101811       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1124 14:12:34.369339       1 controller.go:667] quota admission added evaluator for: endpoints
	I1124 14:12:34.440138       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1124 14:12:34.687984       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1124 14:12:34.740328       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-apiserver [dc37c94894a8d5ee7bf3e1b55fffde9e281f4c1dda3b940aca34d6c4af33d588] <==
	W1124 14:12:13.120592       1 logging.go:55] [core] [Channel #103 SubChannel #105]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1124 14:12:13.120613       1 logging.go:55] [core] [Channel #43 SubChannel #45]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1124 14:12:13.120663       1 logging.go:55] [core] [Channel #119 SubChannel #121]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1124 14:12:13.120714       1 logging.go:55] [core] [Channel #59 SubChannel #61]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1124 14:12:13.120745       1 logging.go:55] [core] [Channel #135 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1124 14:12:13.120796       1 logging.go:55] [core] [Channel #99 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1124 14:12:13.120846       1 logging.go:55] [core] [Channel #139 SubChannel #141]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1124 14:12:13.120896       1 logging.go:55] [core] [Channel #35 SubChannel #37]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1124 14:12:13.120945       1 logging.go:55] [core] [Channel #175 SubChannel #177]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1124 14:12:13.120994       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1124 14:12:13.121120       1 logging.go:55] [core] [Channel #13 SubChannel #15]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1124 14:12:13.120715       1 logging.go:55] [core] [Channel #211 SubChannel #213]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1124 14:12:13.121316       1 logging.go:55] [core] [Channel #87 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1124 14:12:13.121378       1 logging.go:55] [core] [Channel #251 SubChannel #253]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1124 14:12:13.121287       1 logging.go:55] [core] [Channel #63 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1124 14:12:13.121499       1 logging.go:55] [core] [Channel #159 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1124 14:12:13.121563       1 logging.go:55] [core] [Channel #243 SubChannel #245]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1124 14:12:13.121439       1 logging.go:55] [core] [Channel #55 SubChannel #57]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1124 14:12:13.121648       1 logging.go:55] [core] [Channel #83 SubChannel #85]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1124 14:12:13.121721       1 logging.go:55] [core] [Channel #31 SubChannel #33]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1124 14:12:13.121780       1 logging.go:55] [core] [Channel #191 SubChannel #193]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1124 14:12:13.121469       1 logging.go:55] [core] [Channel #75 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1124 14:12:13.121620       1 logging.go:55] [core] [Channel #95 SubChannel #97]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1124 14:12:13.122102       1 logging.go:55] [core] [Channel #231 SubChannel #233]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1124 14:12:13.122486       1 logging.go:55] [core] [Channel #195 SubChannel #197]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [76c82c34484bb53fc1c279f6d1eb1687216631d32b8457d880b2c185a6175dbd] <==
	I1124 14:12:34.360352       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1124 14:12:34.360359       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1124 14:12:34.364293       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1124 14:12:34.365687       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1124 14:12:34.370365       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1124 14:12:34.370449       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1124 14:12:34.370523       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-007087"
	I1124 14:12:34.370563       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1124 14:12:34.371023       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1124 14:12:34.373751       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 14:12:34.376302       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1124 14:12:34.380711       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1124 14:12:34.380774       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1124 14:12:34.381846       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1124 14:12:34.381929       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1124 14:12:34.381981       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1124 14:12:34.382016       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1124 14:12:34.383303       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1124 14:12:34.389934       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 14:12:34.390025       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1124 14:12:34.390056       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1124 14:12:34.391008       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 14:12:34.391115       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1124 14:12:34.395513       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1124 14:12:34.402547       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	
	
	==> kube-controller-manager [f69c60036ed0a1883223128524650e3b4bafdba6ddee0a04f7b68d1b82f62adb] <==
	I1124 14:11:26.386201       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1124 14:11:26.386234       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1124 14:11:26.386551       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1124 14:11:26.386622       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1124 14:11:26.386670       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1124 14:11:26.386710       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1124 14:11:26.397302       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-007087" podCIDRs=["10.244.0.0/24"]
	I1124 14:11:26.400973       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1124 14:11:26.421305       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 14:11:26.423713       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1124 14:11:26.427490       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1124 14:11:26.431885       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1124 14:11:26.432027       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1124 14:11:26.432084       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1124 14:11:26.432097       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1124 14:11:26.432108       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1124 14:11:26.432119       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1124 14:11:26.440711       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1124 14:11:26.444066       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 14:11:26.444255       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 14:11:26.444289       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1124 14:11:26.444336       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1124 14:11:26.447456       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1124 14:11:26.452786       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1124 14:12:11.388203       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [190f787cc9b57074c59aeec3510206ebd7988052da942a584ab393775b5ab95e] <==
	I1124 14:11:28.421852       1 server_linux.go:53] "Using iptables proxy"
	I1124 14:11:28.509689       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1124 14:11:28.610440       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1124 14:11:28.610474       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1124 14:11:28.610542       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 14:11:28.631704       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 14:11:28.631761       1 server_linux.go:132] "Using iptables Proxier"
	I1124 14:11:28.635441       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 14:11:28.635743       1 server.go:527] "Version info" version="v1.34.1"
	I1124 14:11:28.635817       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 14:11:28.640406       1 config.go:200] "Starting service config controller"
	I1124 14:11:28.640495       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 14:11:28.640540       1 config.go:106] "Starting endpoint slice config controller"
	I1124 14:11:28.640568       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 14:11:28.640604       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 14:11:28.640630       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1124 14:11:28.641690       1 config.go:309] "Starting node config controller"
	I1124 14:11:28.642615       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 14:11:28.642680       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1124 14:11:28.741478       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1124 14:11:28.741592       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1124 14:11:28.741659       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [5fcf211b67d41ed778023e86be7ab49fa92d6c1cc001a3c1bec7b4b36589f5ab] <==
	I1124 14:12:28.619529       1 server_linux.go:53] "Using iptables proxy"
	I1124 14:12:31.460393       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1124 14:12:31.696314       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1124 14:12:31.696378       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1124 14:12:31.696473       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 14:12:32.464838       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 14:12:32.464976       1 server_linux.go:132] "Using iptables Proxier"
	I1124 14:12:32.471558       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 14:12:32.471940       1 server.go:527] "Version info" version="v1.34.1"
	I1124 14:12:32.472217       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 14:12:32.473603       1 config.go:200] "Starting service config controller"
	I1124 14:12:32.489743       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 14:12:32.489880       1 config.go:106] "Starting endpoint slice config controller"
	I1124 14:12:32.489910       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 14:12:32.489947       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 14:12:32.489973       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1124 14:12:32.490756       1 config.go:309] "Starting node config controller"
	I1124 14:12:32.490835       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 14:12:32.490868       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1124 14:12:32.591419       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1124 14:12:32.626010       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1124 14:12:32.626075       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [7bec6bf85a9734e71b9d06193b900b58d7d8467d320c984cf1a526c52bcc77c3] <==
	I1124 14:12:26.662215       1 serving.go:386] Generated self-signed cert in-memory
	I1124 14:12:32.838833       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1124 14:12:32.838936       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 14:12:32.847922       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1124 14:12:32.848175       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1124 14:12:32.848238       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1124 14:12:32.848296       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1124 14:12:32.867846       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 14:12:32.875654       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 14:12:32.875761       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1124 14:12:32.875807       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1124 14:12:32.948379       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1124 14:12:32.976546       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1124 14:12:32.976694       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [957acc83a13c0ca073c6f33777b9bd8699a34d773dde4dfad58149f8ebc53ada] <==
	E1124 14:11:19.814311       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1124 14:11:19.814462       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1124 14:11:19.814505       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1124 14:11:19.814540       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1124 14:11:19.814578       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1124 14:11:19.814611       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1124 14:11:19.814645       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1124 14:11:19.814679       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1124 14:11:19.814711       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1124 14:11:19.814743       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1124 14:11:19.814798       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1124 14:11:19.814831       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1124 14:11:19.814865       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1124 14:11:19.814951       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1124 14:11:19.814982       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1124 14:11:19.815017       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1124 14:11:19.817578       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1124 14:11:19.820661       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	I1124 14:11:20.879422       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 14:12:13.090554       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1124 14:12:13.090600       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 14:12:13.090601       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1124 14:12:13.090618       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1124 14:12:13.090762       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1124 14:12:13.090777       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Nov 24 14:12:23 pause-007087 kubelet[1311]: E1124 14:12:23.576067    1311 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-007087\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="b7bbf56258ba9f18243e15d31ecfaaab" pod="kube-system/kube-controller-manager-pause-007087"
	Nov 24 14:12:23 pause-007087 kubelet[1311]: I1124 14:12:23.615127    1311 scope.go:117] "RemoveContainer" containerID="190f787cc9b57074c59aeec3510206ebd7988052da942a584ab393775b5ab95e"
	Nov 24 14:12:23 pause-007087 kubelet[1311]: E1124 14:12:23.622220    1311 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-gwt47\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="c69586cb-c2a9-4f7a-b6f8-cb1619cdeab2" pod="kube-system/coredns-66bc5c9577-gwt47"
	Nov 24 14:12:23 pause-007087 kubelet[1311]: E1124 14:12:23.622460    1311 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-007087\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="bbb1c59dc901ce0e11ae2d021e4ee1a3" pod="kube-system/kube-scheduler-pause-007087"
	Nov 24 14:12:23 pause-007087 kubelet[1311]: E1124 14:12:23.622623    1311 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-007087\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="90af4cfd87c3f9c666ba6c00ed39c9e2" pod="kube-system/etcd-pause-007087"
	Nov 24 14:12:23 pause-007087 kubelet[1311]: E1124 14:12:23.622785    1311 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-007087\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="295589c48cf97a4f3c8390e963978fe8" pod="kube-system/kube-apiserver-pause-007087"
	Nov 24 14:12:23 pause-007087 kubelet[1311]: E1124 14:12:23.622939    1311 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-007087\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="b7bbf56258ba9f18243e15d31ecfaaab" pod="kube-system/kube-controller-manager-pause-007087"
	Nov 24 14:12:23 pause-007087 kubelet[1311]: E1124 14:12:23.623281    1311 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rdjsw\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="21c70dc9-5b77-4fca-91a1-07f91d701426" pod="kube-system/kube-proxy-rdjsw"
	Nov 24 14:12:23 pause-007087 kubelet[1311]: I1124 14:12:23.636082    1311 scope.go:117] "RemoveContainer" containerID="f29d9247e0cecb84740fabf059644f3deb00691f26554c62425089202ee4784a"
	Nov 24 14:12:23 pause-007087 kubelet[1311]: E1124 14:12:23.636619    1311 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-gwt47\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="c69586cb-c2a9-4f7a-b6f8-cb1619cdeab2" pod="kube-system/coredns-66bc5c9577-gwt47"
	Nov 24 14:12:23 pause-007087 kubelet[1311]: E1124 14:12:23.636783    1311 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-007087\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="bbb1c59dc901ce0e11ae2d021e4ee1a3" pod="kube-system/kube-scheduler-pause-007087"
	Nov 24 14:12:23 pause-007087 kubelet[1311]: E1124 14:12:23.636920    1311 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-007087\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="90af4cfd87c3f9c666ba6c00ed39c9e2" pod="kube-system/etcd-pause-007087"
	Nov 24 14:12:23 pause-007087 kubelet[1311]: E1124 14:12:23.637056    1311 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-007087\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="295589c48cf97a4f3c8390e963978fe8" pod="kube-system/kube-apiserver-pause-007087"
	Nov 24 14:12:23 pause-007087 kubelet[1311]: E1124 14:12:23.637193    1311 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-007087\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="b7bbf56258ba9f18243e15d31ecfaaab" pod="kube-system/kube-controller-manager-pause-007087"
	Nov 24 14:12:23 pause-007087 kubelet[1311]: E1124 14:12:23.637325    1311 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kindnet-z2thf\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="b9875adf-ee35-420a-88ef-166ae56bbed5" pod="kube-system/kindnet-z2thf"
	Nov 24 14:12:23 pause-007087 kubelet[1311]: E1124 14:12:23.637458    1311 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rdjsw\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="21c70dc9-5b77-4fca-91a1-07f91d701426" pod="kube-system/kube-proxy-rdjsw"
	Nov 24 14:12:31 pause-007087 kubelet[1311]: E1124 14:12:31.034458    1311 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-apiserver-pause-007087\" is forbidden: User \"system:node:pause-007087\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-007087' and this object" podUID="295589c48cf97a4f3c8390e963978fe8" pod="kube-system/kube-apiserver-pause-007087"
	Nov 24 14:12:31 pause-007087 kubelet[1311]: E1124 14:12:31.034712    1311 reflector.go:205] "Failed to watch" err="configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:pause-007087\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-007087' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap"
	Nov 24 14:12:31 pause-007087 kubelet[1311]: E1124 14:12:31.034738    1311 reflector.go:205] "Failed to watch" err="configmaps \"kube-proxy\" is forbidden: User \"system:node:pause-007087\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-007087' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap"
	Nov 24 14:12:31 pause-007087 kubelet[1311]: E1124 14:12:31.034751    1311 reflector.go:205] "Failed to watch" err="configmaps \"coredns\" is forbidden: User \"system:node:pause-007087\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-007087' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"coredns\"" type="*v1.ConfigMap"
	Nov 24 14:12:31 pause-007087 kubelet[1311]: E1124 14:12:31.140788    1311 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-controller-manager-pause-007087\" is forbidden: User \"system:node:pause-007087\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-007087' and this object" podUID="b7bbf56258ba9f18243e15d31ecfaaab" pod="kube-system/kube-controller-manager-pause-007087"
	Nov 24 14:12:31 pause-007087 kubelet[1311]: W1124 14:12:31.625443    1311 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Nov 24 14:12:43 pause-007087 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 24 14:12:44 pause-007087 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 24 14:12:44 pause-007087 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
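The kubelet tail above ends with systemd deactivating kubelet.service just as the pause was issued, while the freshly restarted control-plane containers (the second kube-apiserver, kube-controller-manager, kube-proxy, and kube-scheduler instances) had only just finished syncing caches. As a hedged diagnostic that is not part of the harness, one way to see which CRI-O containers were actually left running or paused inside the node is to query the runtime over minikube's ssh; the profile name is taken from this report:

	# list every CRI-O container in the node together with its current state
	out/minikube-linux-arm64 -p pause-007087 ssh -- sudo crictl ps -a
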
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-007087 -n pause-007087
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-007087 -n pause-007087: exit status 2 (521.070434ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
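minikube status deliberately exits non-zero whenever any component is in an unexpected state, which is why the harness tolerates exit status 2 here even though the {{.APIServer}} template printed Running. A minimal sketch for reading every component field in one call, assuming the same profile (the JSON output mode is a standard minikube status flag, not harness-specific):

	# print host, kubelet, and apiserver state for the profile as JSON
	out/minikube-linux-arm64 status -p pause-007087 --output json
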
helpers_test.go:269: (dbg) Run:  kubectl --context pause-007087 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-007087
helpers_test.go:243: (dbg) docker inspect pause-007087:

-- stdout --
	[
	    {
	        "Id": "eade3cbbfb41d4e4cfb010c3e35b00ea58d124c3bdbd3ff7cac154bbebc1d0b5",
	        "Created": "2025-11-24T14:10:53.646189475Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 162187,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T14:10:53.718365572Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/eade3cbbfb41d4e4cfb010c3e35b00ea58d124c3bdbd3ff7cac154bbebc1d0b5/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/eade3cbbfb41d4e4cfb010c3e35b00ea58d124c3bdbd3ff7cac154bbebc1d0b5/hostname",
	        "HostsPath": "/var/lib/docker/containers/eade3cbbfb41d4e4cfb010c3e35b00ea58d124c3bdbd3ff7cac154bbebc1d0b5/hosts",
	        "LogPath": "/var/lib/docker/containers/eade3cbbfb41d4e4cfb010c3e35b00ea58d124c3bdbd3ff7cac154bbebc1d0b5/eade3cbbfb41d4e4cfb010c3e35b00ea58d124c3bdbd3ff7cac154bbebc1d0b5-json.log",
	        "Name": "/pause-007087",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-007087:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-007087",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "eade3cbbfb41d4e4cfb010c3e35b00ea58d124c3bdbd3ff7cac154bbebc1d0b5",
	                "LowerDir": "/var/lib/docker/overlay2/8c4e521782a36aada5fff59d5bec29ce59a596b4f3ea1eaaf9b9b986c7ad9ed5-init/diff:/var/lib/docker/overlay2/13a44a1c9c7389f495d930a01834ff28273a0e5eb2fe3411fc4db3ff0709690d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8c4e521782a36aada5fff59d5bec29ce59a596b4f3ea1eaaf9b9b986c7ad9ed5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8c4e521782a36aada5fff59d5bec29ce59a596b4f3ea1eaaf9b9b986c7ad9ed5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8c4e521782a36aada5fff59d5bec29ce59a596b4f3ea1eaaf9b9b986c7ad9ed5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-007087",
	                "Source": "/var/lib/docker/volumes/pause-007087/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-007087",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-007087",
	                "name.minikube.sigs.k8s.io": "pause-007087",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "09d3749193968070e3270ccf8f86c2078a55f4fc7b5c7815809878e14a1d9a76",
	            "SandboxKey": "/var/run/docker/netns/09d374919396",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33023"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33024"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33027"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33025"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33026"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-007087": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "1e:26:f4:26:0c:3b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "91d34f833e5a794c1a8c34d58d3b051bc5abdc8b5bf166590ca1a2b006806749",
	                    "EndpointID": "6093e0b493baa05b823a7a1aeb73f4e2bcc516edce2d5cc92cba075b357c39e9",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-007087",
	                        "eade3cbbfb41"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
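The inspect output above shows the node's 8443/tcp apiserver port published to the host at 127.0.0.1:33026. As a sketch, the same mapping can be read without paging through the full JSON by handing docker inspect a Go template; the container name is taken from this report:

	# extract the host port that Docker mapped to the node's 8443/tcp
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' pause-007087
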
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-007087 -n pause-007087
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-007087 -n pause-007087: exit status 2 (346.529267ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
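Note that the docker inspect output earlier reports "Paused": false even though this is a pause test: minikube pause freezes the Kubernetes processes inside the node rather than pausing the outer Docker container, so a running container-level state is expected here. A quick hedged check of that container-level state directly:

	# show only the Docker-level status and pause flag of the node container
	docker inspect -f '{{.State.Status}} paused={{.State.Paused}}' pause-007087
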
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p pause-007087 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p pause-007087 logs -n 25: (1.457012497s)
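Each ==> component <== section below is capped at the last 25 lines per log by the -n flag above (minikube logs' length option). For offline triage one could instead capture the complete logs to a file, a hedged variant using standard minikube flags:

	# write the profile's full logs to a local file instead of stdout
	out/minikube-linux-arm64 -p pause-007087 logs --file ./pause-007087.log
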
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                   ARGS                                                                   │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p missing-upgrade-593066                                                                                                                │ missing-upgrade-593066    │ jenkins │ v1.37.0 │ 24 Nov 25 14:06 UTC │ 24 Nov 25 14:06 UTC │
	│ start   │ -p kubernetes-upgrade-610110 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-610110 │ jenkins │ v1.37.0 │ 24 Nov 25 14:06 UTC │ 24 Nov 25 14:07 UTC │
	│ stop    │ -p kubernetes-upgrade-610110                                                                                                             │ kubernetes-upgrade-610110 │ jenkins │ v1.37.0 │ 24 Nov 25 14:07 UTC │ 24 Nov 25 14:07 UTC │
	│ start   │ -p kubernetes-upgrade-610110 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-610110 │ jenkins │ v1.37.0 │ 24 Nov 25 14:07 UTC │ 24 Nov 25 14:12 UTC │
	│ delete  │ -p NoKubernetes-637834                                                                                                                   │ NoKubernetes-637834       │ jenkins │ v1.37.0 │ 24 Nov 25 14:07 UTC │ 24 Nov 25 14:07 UTC │
	│ start   │ -p NoKubernetes-637834 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                    │ NoKubernetes-637834       │ jenkins │ v1.37.0 │ 24 Nov 25 14:07 UTC │ 24 Nov 25 14:07 UTC │
	│ ssh     │ -p NoKubernetes-637834 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-637834       │ jenkins │ v1.37.0 │ 24 Nov 25 14:07 UTC │                     │
	│ stop    │ -p NoKubernetes-637834                                                                                                                   │ NoKubernetes-637834       │ jenkins │ v1.37.0 │ 24 Nov 25 14:08 UTC │ 24 Nov 25 14:08 UTC │
	│ start   │ -p NoKubernetes-637834 --driver=docker  --container-runtime=crio                                                                         │ NoKubernetes-637834       │ jenkins │ v1.37.0 │ 24 Nov 25 14:08 UTC │ 24 Nov 25 14:08 UTC │
	│ ssh     │ -p NoKubernetes-637834 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-637834       │ jenkins │ v1.37.0 │ 24 Nov 25 14:08 UTC │                     │
	│ delete  │ -p NoKubernetes-637834                                                                                                                   │ NoKubernetes-637834       │ jenkins │ v1.37.0 │ 24 Nov 25 14:08 UTC │ 24 Nov 25 14:08 UTC │
	│ start   │ -p stopped-upgrade-189175 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ stopped-upgrade-189175    │ jenkins │ v1.32.0 │ 24 Nov 25 14:08 UTC │ 24 Nov 25 14:09 UTC │
	│ stop    │ stopped-upgrade-189175 stop                                                                                                              │ stopped-upgrade-189175    │ jenkins │ v1.32.0 │ 24 Nov 25 14:09 UTC │ 24 Nov 25 14:09 UTC │
	│ start   │ -p stopped-upgrade-189175 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ stopped-upgrade-189175    │ jenkins │ v1.37.0 │ 24 Nov 25 14:09 UTC │ 24 Nov 25 14:09 UTC │
	│ delete  │ -p stopped-upgrade-189175                                                                                                                │ stopped-upgrade-189175    │ jenkins │ v1.37.0 │ 24 Nov 25 14:09 UTC │ 24 Nov 25 14:09 UTC │
	│ start   │ -p running-upgrade-668851 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ running-upgrade-668851    │ jenkins │ v1.32.0 │ 24 Nov 25 14:09 UTC │ 24 Nov 25 14:10 UTC │
	│ start   │ -p running-upgrade-668851 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ running-upgrade-668851    │ jenkins │ v1.37.0 │ 24 Nov 25 14:10 UTC │ 24 Nov 25 14:10 UTC │
	│ delete  │ -p running-upgrade-668851                                                                                                                │ running-upgrade-668851    │ jenkins │ v1.37.0 │ 24 Nov 25 14:10 UTC │ 24 Nov 25 14:10 UTC │
	│ start   │ -p pause-007087 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                │ pause-007087              │ jenkins │ v1.37.0 │ 24 Nov 25 14:10 UTC │ 24 Nov 25 14:12 UTC │
	│ start   │ -p kubernetes-upgrade-610110 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                        │ kubernetes-upgrade-610110 │ jenkins │ v1.37.0 │ 24 Nov 25 14:12 UTC │                     │
	│ start   │ -p kubernetes-upgrade-610110 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-610110 │ jenkins │ v1.37.0 │ 24 Nov 25 14:12 UTC │ 24 Nov 25 14:12 UTC │
	│ start   │ -p pause-007087 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                         │ pause-007087              │ jenkins │ v1.37.0 │ 24 Nov 25 14:12 UTC │ 24 Nov 25 14:12 UTC │
	│ delete  │ -p kubernetes-upgrade-610110                                                                                                             │ kubernetes-upgrade-610110 │ jenkins │ v1.37.0 │ 24 Nov 25 14:12 UTC │ 24 Nov 25 14:12 UTC │
	│ start   │ -p force-systemd-flag-928059 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio              │ force-systemd-flag-928059 │ jenkins │ v1.37.0 │ 24 Nov 25 14:12 UTC │                     │
	│ pause   │ -p pause-007087 --alsologtostderr -v=5                                                                                                   │ pause-007087              │ jenkins │ v1.37.0 │ 24 Nov 25 14:12 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 14:12:39
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 14:12:39.696393  168533 out.go:360] Setting OutFile to fd 1 ...
	I1124 14:12:39.696601  168533 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 14:12:39.696628  168533 out.go:374] Setting ErrFile to fd 2...
	I1124 14:12:39.696655  168533 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 14:12:39.697071  168533 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-2805/.minikube/bin
	I1124 14:12:39.697631  168533 out.go:368] Setting JSON to false
	I1124 14:12:39.698717  168533 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":6911,"bootTime":1763986649,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1124 14:12:39.698857  168533 start.go:143] virtualization:  
	I1124 14:12:39.702496  168533 out.go:179] * [force-systemd-flag-928059] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1124 14:12:39.705036  168533 out.go:179]   - MINIKUBE_LOCATION=21932
	I1124 14:12:39.705181  168533 notify.go:221] Checking for updates...
	I1124 14:12:39.711442  168533 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 14:12:39.714678  168533 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21932-2805/kubeconfig
	I1124 14:12:39.717720  168533 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21932-2805/.minikube
	I1124 14:12:39.720740  168533 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1124 14:12:39.723770  168533 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 14:12:39.727410  168533 config.go:182] Loaded profile config "pause-007087": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 14:12:39.727519  168533 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 14:12:39.756267  168533 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1124 14:12:39.756400  168533 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 14:12:39.823132  168533 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-24 14:12:39.813068773 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 14:12:39.823238  168533 docker.go:319] overlay module found
	I1124 14:12:39.826559  168533 out.go:179] * Using the docker driver based on user configuration
	I1124 14:12:39.829510  168533 start.go:309] selected driver: docker
	I1124 14:12:39.829528  168533 start.go:927] validating driver "docker" against <nil>
	I1124 14:12:39.829542  168533 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 14:12:39.830287  168533 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 14:12:39.903520  168533 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-24 14:12:39.893214671 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 14:12:39.903680  168533 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1124 14:12:39.903918  168533 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1124 14:12:39.906976  168533 out.go:179] * Using Docker driver with root privileges
	I1124 14:12:39.909861  168533 cni.go:84] Creating CNI manager for ""
	I1124 14:12:39.909934  168533 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 14:12:39.909948  168533 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1124 14:12:39.910105  168533 start.go:353] cluster config:
	{Name:force-systemd-flag-928059 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-flag-928059 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 14:12:39.913253  168533 out.go:179] * Starting "force-systemd-flag-928059" primary control-plane node in "force-systemd-flag-928059" cluster
	I1124 14:12:39.916166  168533 cache.go:134] Beginning downloading kic base image for docker with crio
	I1124 14:12:39.919194  168533 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1124 14:12:39.922214  168533 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 14:12:39.922265  168533 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21932-2805/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1124 14:12:39.922274  168533 cache.go:65] Caching tarball of preloaded images
	I1124 14:12:39.922359  168533 preload.go:238] Found /home/jenkins/minikube-integration/21932-2805/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1124 14:12:39.922369  168533 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1124 14:12:39.922494  168533 profile.go:143] Saving config to /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/force-systemd-flag-928059/config.json ...
	I1124 14:12:39.922513  168533 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/force-systemd-flag-928059/config.json: {Name:mkc6e555cb8558e370f3b1cb5d7b1c239d0b3402 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:12:39.922670  168533 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1124 14:12:39.943196  168533 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1124 14:12:39.943217  168533 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1124 14:12:39.943233  168533 cache.go:240] Successfully downloaded all kic artifacts
	I1124 14:12:39.943263  168533 start.go:360] acquireMachinesLock for force-systemd-flag-928059: {Name:mk69c9a6f0f0c69847d09d52b02915ec79b546e3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 14:12:39.943417  168533 start.go:364] duration metric: took 138.094µs to acquireMachinesLock for "force-systemd-flag-928059"
	I1124 14:12:39.943451  168533 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-928059 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-flag-928059 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 14:12:39.943525  168533 start.go:125] createHost starting for "" (driver="docker")
	I1124 14:12:38.084748  166961 pod_ready.go:94] pod "etcd-pause-007087" is "Ready"
	I1124 14:12:38.084788  166961 pod_ready.go:86] duration metric: took 2.008454436s for pod "etcd-pause-007087" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:12:38.088097  166961 pod_ready.go:83] waiting for pod "kube-apiserver-pause-007087" in "kube-system" namespace to be "Ready" or be gone ...
	W1124 14:12:40.095777  166961 pod_ready.go:104] pod "kube-apiserver-pause-007087" is not "Ready", error: <nil>
	W1124 14:12:42.113924  166961 pod_ready.go:104] pod "kube-apiserver-pause-007087" is not "Ready", error: <nil>
	I1124 14:12:43.099269  166961 pod_ready.go:94] pod "kube-apiserver-pause-007087" is "Ready"
	I1124 14:12:43.099291  166961 pod_ready.go:86] duration metric: took 5.011163612s for pod "kube-apiserver-pause-007087" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:12:43.102691  166961 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-007087" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:12:43.108688  166961 pod_ready.go:94] pod "kube-controller-manager-pause-007087" is "Ready"
	I1124 14:12:43.108715  166961 pod_ready.go:86] duration metric: took 5.997701ms for pod "kube-controller-manager-pause-007087" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:12:43.111744  166961 pod_ready.go:83] waiting for pod "kube-proxy-rdjsw" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:12:43.117305  166961 pod_ready.go:94] pod "kube-proxy-rdjsw" is "Ready"
	I1124 14:12:43.117334  166961 pod_ready.go:86] duration metric: took 5.563809ms for pod "kube-proxy-rdjsw" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:12:43.119780  166961 pod_ready.go:83] waiting for pod "kube-scheduler-pause-007087" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:12:43.292381  166961 pod_ready.go:94] pod "kube-scheduler-pause-007087" is "Ready"
	I1124 14:12:43.292408  166961 pod_ready.go:86] duration metric: took 172.60527ms for pod "kube-scheduler-pause-007087" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:12:43.292420  166961 pod_ready.go:40] duration metric: took 10.230465001s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 14:12:43.369827  166961 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1124 14:12:43.378252  166961 out.go:179] * Done! kubectl is now configured to use "pause-007087" cluster and "default" namespace by default
	I1124 14:12:39.946988  168533 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1124 14:12:39.947220  168533 start.go:159] libmachine.API.Create for "force-systemd-flag-928059" (driver="docker")
	I1124 14:12:39.947254  168533 client.go:173] LocalClient.Create starting
	I1124 14:12:39.947393  168533 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca.pem
	I1124 14:12:39.947433  168533 main.go:143] libmachine: Decoding PEM data...
	I1124 14:12:39.947459  168533 main.go:143] libmachine: Parsing certificate...
	I1124 14:12:39.947518  168533 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21932-2805/.minikube/certs/cert.pem
	I1124 14:12:39.947544  168533 main.go:143] libmachine: Decoding PEM data...
	I1124 14:12:39.947560  168533 main.go:143] libmachine: Parsing certificate...
	I1124 14:12:39.947940  168533 cli_runner.go:164] Run: docker network inspect force-systemd-flag-928059 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1124 14:12:39.963776  168533 cli_runner.go:211] docker network inspect force-systemd-flag-928059 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1124 14:12:39.963866  168533 network_create.go:284] running [docker network inspect force-systemd-flag-928059] to gather additional debugging logs...
	I1124 14:12:39.963883  168533 cli_runner.go:164] Run: docker network inspect force-systemd-flag-928059
	W1124 14:12:39.980363  168533 cli_runner.go:211] docker network inspect force-systemd-flag-928059 returned with exit code 1
	I1124 14:12:39.980396  168533 network_create.go:287] error running [docker network inspect force-systemd-flag-928059]: docker network inspect force-systemd-flag-928059: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-flag-928059 not found
	I1124 14:12:39.980411  168533 network_create.go:289] output of [docker network inspect force-systemd-flag-928059]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-flag-928059 not found
	
	** /stderr **
	I1124 14:12:39.980517  168533 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 14:12:39.996436  168533 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-b3087ee9f269 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:1a:07:60:94:e6:54} reservation:<nil>}
	I1124 14:12:39.996739  168533 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-87dca5a19352 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:e6:6c:c1:85:45:94} reservation:<nil>}
	I1124 14:12:39.997036  168533 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-9e995bd1b79e IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:82:f1:73:f5:6f:cf} reservation:<nil>}
	I1124 14:12:39.997355  168533 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-91d34f833e5a IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:82:34:a1:7d:09:f3} reservation:<nil>}
	I1124 14:12:39.997759  168533 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a10f50}
	I1124 14:12:39.997786  168533 network_create.go:124] attempt to create docker network force-systemd-flag-928059 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1124 14:12:39.997853  168533 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-928059 force-systemd-flag-928059
	I1124 14:12:40.080039  168533 network_create.go:108] docker network force-systemd-flag-928059 192.168.85.0/24 created
	I1124 14:12:40.080092  168533 kic.go:121] calculated static IP "192.168.85.2" for the "force-systemd-flag-928059" container
	I1124 14:12:40.080168  168533 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1124 14:12:40.100659  168533 cli_runner.go:164] Run: docker volume create force-systemd-flag-928059 --label name.minikube.sigs.k8s.io=force-systemd-flag-928059 --label created_by.minikube.sigs.k8s.io=true
	I1124 14:12:40.118843  168533 oci.go:103] Successfully created a docker volume force-systemd-flag-928059
	I1124 14:12:40.118949  168533 cli_runner.go:164] Run: docker run --rm --name force-systemd-flag-928059-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-928059 --entrypoint /usr/bin/test -v force-systemd-flag-928059:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1124 14:12:40.679052  168533 oci.go:107] Successfully prepared a docker volume force-systemd-flag-928059
	I1124 14:12:40.679115  168533 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 14:12:40.679128  168533 kic.go:194] Starting extracting preloaded images to volume ...
	I1124 14:12:40.679200  168533 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21932-2805/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-928059:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir
	
	
	==> CRI-O <==
	Nov 24 14:12:23 pause-007087 crio[2076]: time="2025-11-24T14:12:23.703238227Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 14:12:23 pause-007087 crio[2076]: time="2025-11-24T14:12:23.724460998Z" level=info msg="Created container 4026e5c2f2cf1ab6bf194371e28d50a39a1daa43fcf0ab9101aa80b4a440331f: kube-system/etcd-pause-007087/etcd" id=8a90caca-d071-4cf6-939e-e0a420ffc7de name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 14:12:23 pause-007087 crio[2076]: time="2025-11-24T14:12:23.726145294Z" level=info msg="Starting container: 4026e5c2f2cf1ab6bf194371e28d50a39a1daa43fcf0ab9101aa80b4a440331f" id=8b89d9dc-52a8-4859-8d8b-f92e32156504 name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 14:12:23 pause-007087 crio[2076]: time="2025-11-24T14:12:23.728887185Z" level=info msg="Started container" PID=2379 containerID=4026e5c2f2cf1ab6bf194371e28d50a39a1daa43fcf0ab9101aa80b4a440331f description=kube-system/etcd-pause-007087/etcd id=8b89d9dc-52a8-4859-8d8b-f92e32156504 name=/runtime.v1.RuntimeService/StartContainer sandboxID=1f9300f5e92ae9b66f5f6a65e0c5589e76965543b644b51d88efd1b539526c65
	Nov 24 14:12:23 pause-007087 crio[2076]: time="2025-11-24T14:12:23.758958269Z" level=info msg="Created container 2e7813c68b435644db6b5b631c6a51381cbf7671e158fa0cff68b4d62ce0855c: kube-system/kindnet-z2thf/kindnet-cni" id=72811640-f540-4395-bcf4-e61eef79ea06 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 14:12:23 pause-007087 crio[2076]: time="2025-11-24T14:12:23.759698602Z" level=info msg="Starting container: 2e7813c68b435644db6b5b631c6a51381cbf7671e158fa0cff68b4d62ce0855c" id=da55c74b-0e32-4d1b-bd70-173e57eea19d name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 14:12:23 pause-007087 crio[2076]: time="2025-11-24T14:12:23.761808314Z" level=info msg="Started container" PID=2403 containerID=2e7813c68b435644db6b5b631c6a51381cbf7671e158fa0cff68b4d62ce0855c description=kube-system/kindnet-z2thf/kindnet-cni id=da55c74b-0e32-4d1b-bd70-173e57eea19d name=/runtime.v1.RuntimeService/StartContainer sandboxID=409b4bcad92ff1e3663df36d7de57f1d53049687e5e5472c408c37c8ded199dc
	Nov 24 14:12:24 pause-007087 crio[2076]: time="2025-11-24T14:12:24.081335033Z" level=info msg="Created container 5fcf211b67d41ed778023e86be7ab49fa92d6c1cc001a3c1bec7b4b36589f5ab: kube-system/kube-proxy-rdjsw/kube-proxy" id=52bc38d9-885b-4fc1-b075-038eeef04981 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 14:12:24 pause-007087 crio[2076]: time="2025-11-24T14:12:24.081993256Z" level=info msg="Starting container: 5fcf211b67d41ed778023e86be7ab49fa92d6c1cc001a3c1bec7b4b36589f5ab" id=6d0cfb88-ae6f-4faa-b638-96ca1a06cf55 name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 14:12:24 pause-007087 crio[2076]: time="2025-11-24T14:12:24.084844039Z" level=info msg="Started container" PID=2404 containerID=5fcf211b67d41ed778023e86be7ab49fa92d6c1cc001a3c1bec7b4b36589f5ab description=kube-system/kube-proxy-rdjsw/kube-proxy id=6d0cfb88-ae6f-4faa-b638-96ca1a06cf55 name=/runtime.v1.RuntimeService/StartContainer sandboxID=7d00ea52de5b3fa8fdda4d92a4ed7e48d9b3f7b8d2df2ded3840cbe64140c8d8
	Nov 24 14:12:34 pause-007087 crio[2076]: time="2025-11-24T14:12:34.141762839Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 24 14:12:34 pause-007087 crio[2076]: time="2025-11-24T14:12:34.145351994Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 24 14:12:34 pause-007087 crio[2076]: time="2025-11-24T14:12:34.145394924Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 24 14:12:34 pause-007087 crio[2076]: time="2025-11-24T14:12:34.145419465Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 24 14:12:34 pause-007087 crio[2076]: time="2025-11-24T14:12:34.149957957Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 24 14:12:34 pause-007087 crio[2076]: time="2025-11-24T14:12:34.149999139Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 24 14:12:34 pause-007087 crio[2076]: time="2025-11-24T14:12:34.150025281Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 24 14:12:34 pause-007087 crio[2076]: time="2025-11-24T14:12:34.154218554Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 24 14:12:34 pause-007087 crio[2076]: time="2025-11-24T14:12:34.154260072Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 24 14:12:34 pause-007087 crio[2076]: time="2025-11-24T14:12:34.154283941Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 24 14:12:34 pause-007087 crio[2076]: time="2025-11-24T14:12:34.157714465Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 24 14:12:34 pause-007087 crio[2076]: time="2025-11-24T14:12:34.157750962Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 24 14:12:34 pause-007087 crio[2076]: time="2025-11-24T14:12:34.15777657Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 24 14:12:34 pause-007087 crio[2076]: time="2025-11-24T14:12:34.161228214Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 24 14:12:34 pause-007087 crio[2076]: time="2025-11-24T14:12:34.161269051Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	2e7813c68b435       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   25 seconds ago       Running             kindnet-cni               1                   409b4bcad92ff       kindnet-z2thf                          kube-system
	5fcf211b67d41       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   25 seconds ago       Running             kube-proxy                1                   7d00ea52de5b3       kube-proxy-rdjsw                       kube-system
	4026e5c2f2cf1       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   25 seconds ago       Running             etcd                      1                   1f9300f5e92ae       etcd-pause-007087                      kube-system
	d5ce86abc9d8a       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   25 seconds ago       Running             kube-apiserver            1                   a0d6446f5f1bc       kube-apiserver-pause-007087            kube-system
	76c82c34484bb       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   25 seconds ago       Running             kube-controller-manager   1                   4f286f0c61d9f       kube-controller-manager-pause-007087   kube-system
	7bec6bf85a973       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   25 seconds ago       Running             kube-scheduler            1                   1ad71fa8bd5e2       kube-scheduler-pause-007087            kube-system
	8b84bd360beaa       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   25 seconds ago       Running             coredns                   1                   fdb6dd3f1d754       coredns-66bc5c9577-gwt47               kube-system
	2ea4162d56185       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   40 seconds ago       Exited              coredns                   0                   fdb6dd3f1d754       coredns-66bc5c9577-gwt47               kube-system
	190f787cc9b57       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   About a minute ago   Exited              kube-proxy                0                   7d00ea52de5b3       kube-proxy-rdjsw                       kube-system
	f29d9247e0cec       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   About a minute ago   Exited              kindnet-cni               0                   409b4bcad92ff       kindnet-z2thf                          kube-system
	957acc83a13c0       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   About a minute ago   Exited              kube-scheduler            0                   1ad71fa8bd5e2       kube-scheduler-pause-007087            kube-system
	f69c60036ed0a       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   About a minute ago   Exited              kube-controller-manager   0                   4f286f0c61d9f       kube-controller-manager-pause-007087   kube-system
	5f04b8e7804bd       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   About a minute ago   Exited              etcd                      0                   1f9300f5e92ae       etcd-pause-007087                      kube-system
	dc37c94894a8d       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   About a minute ago   Exited              kube-apiserver            0                   a0d6446f5f1bc       kube-apiserver-pause-007087            kube-system
	
	
	==> coredns [2ea4162d56185e1c61afd3d7a663045d3c50c97f7dd7ccda92ea411c09006854] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:34357 - 42642 "HINFO IN 5064016793608895211.3002910998539418074. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.037285172s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [8b84bd360beaa1b3826a8723ebe0bc93f98205c8ee561e2303b13f4dc0f5a7c9] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:52718 - 39233 "HINFO IN 375354111847071670.246084428137399410. udp 55 false 512" NXDOMAIN qr,rd,ra 55 0.043508225s
	
	
	==> describe nodes <==
	Name:               pause-007087
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-007087
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b5d1c9f4e75f4e638a533695fd62619949cefcab
	                    minikube.k8s.io/name=pause-007087
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T14_11_22_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 14:11:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-007087
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 14:12:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 14:12:08 +0000   Mon, 24 Nov 2025 14:11:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 14:12:08 +0000   Mon, 24 Nov 2025 14:11:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 14:12:08 +0000   Mon, 24 Nov 2025 14:11:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Nov 2025 14:12:08 +0000   Mon, 24 Nov 2025 14:12:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    pause-007087
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 7283ea1857f18f20a875c29069214c9d
	  System UUID:                719d6cce-d8b3-486e-b679-c2d3627359a4
	  Boot ID:                    1b5f797b-5607-4a65-8de2-379783b7e272
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-gwt47                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     82s
	  kube-system                 etcd-pause-007087                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         88s
	  kube-system                 kindnet-z2thf                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      82s
	  kube-system                 kube-apiserver-pause-007087             250m (12%)    0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 kube-controller-manager-pause-007087    200m (10%)    0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 kube-proxy-rdjsw                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         82s
	  kube-system                 kube-scheduler-pause-007087             100m (5%)     0 (0%)      0 (0%)           0 (0%)         88s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 81s   kube-proxy       
	  Normal   Starting                 17s   kube-proxy       
	  Normal   Starting                 88s   kubelet          Starting kubelet.
	  Warning  CgroupV1                 88s   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  88s   kubelet          Node pause-007087 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    88s   kubelet          Node pause-007087 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     88s   kubelet          Node pause-007087 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           83s   node-controller  Node pause-007087 event: Registered Node pause-007087 in Controller
	  Normal   NodeReady                41s   kubelet          Node pause-007087 status is now: NodeReady
	  Warning  ContainerGCFailed        28s   kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           15s   node-controller  Node pause-007087 event: Registered Node pause-007087 in Controller
	
	
	==> dmesg <==
	[Nov24 13:42] overlayfs: idmapped layers are currently not supported
	[Nov24 13:43] overlayfs: idmapped layers are currently not supported
	[  +3.897493] overlayfs: idmapped layers are currently not supported
	[Nov24 13:44] overlayfs: idmapped layers are currently not supported
	[  +3.266311] hrtimer: interrupt took 21855280 ns
	[Nov24 13:45] overlayfs: idmapped layers are currently not supported
	[Nov24 13:46] overlayfs: idmapped layers are currently not supported
	[Nov24 13:52] overlayfs: idmapped layers are currently not supported
	[ +31.432146] overlayfs: idmapped layers are currently not supported
	[Nov24 13:53] overlayfs: idmapped layers are currently not supported
	[Nov24 13:54] overlayfs: idmapped layers are currently not supported
	[Nov24 13:56] overlayfs: idmapped layers are currently not supported
	[Nov24 13:57] overlayfs: idmapped layers are currently not supported
	[Nov24 13:58] overlayfs: idmapped layers are currently not supported
	[  +2.963383] overlayfs: idmapped layers are currently not supported
	[ +47.364934] overlayfs: idmapped layers are currently not supported
	[Nov24 13:59] overlayfs: idmapped layers are currently not supported
	[Nov24 14:00] overlayfs: idmapped layers are currently not supported
	[ +26.972375] overlayfs: idmapped layers are currently not supported
	[Nov24 14:02] overlayfs: idmapped layers are currently not supported
	[Nov24 14:03] overlayfs: idmapped layers are currently not supported
	[Nov24 14:05] overlayfs: idmapped layers are currently not supported
	[Nov24 14:07] overlayfs: idmapped layers are currently not supported
	[ +22.741489] overlayfs: idmapped layers are currently not supported
	[Nov24 14:11] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [4026e5c2f2cf1ab6bf194371e28d50a39a1daa43fcf0ab9101aa80b4a440331f] <==
	{"level":"warn","ts":"2025-11-24T14:12:27.971404Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43892","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:12:28.057257Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43904","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:12:28.110030Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43920","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:12:28.191639Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43950","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:12:28.247445Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43984","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:12:28.267048Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44004","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:12:28.291896Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44018","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:12:28.368170Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44030","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:12:28.409679Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44050","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:12:28.455745Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44062","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:12:28.512770Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44076","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:12:28.533487Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44084","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:12:28.577877Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44098","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:12:28.612490Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44120","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:12:28.658484Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44134","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:12:28.732462Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44154","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:12:28.879615Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44182","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:12:28.880784Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44172","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:12:28.915704Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44202","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:12:28.967641Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44222","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:12:29.041584Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44228","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:12:29.080924Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44242","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:12:29.110620Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44264","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:12:29.163418Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44294","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:12:29.349927Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44316","server-name":"","error":"EOF"}
	
	
	==> etcd [5f04b8e7804bd26b2d04ccb2b3138341d474bf2459719c4eddb573df39dbb7c5] <==
	{"level":"warn","ts":"2025-11-24T14:11:17.141223Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48828","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:11:17.156491Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48848","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:11:17.177701Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48852","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:11:17.219859Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48876","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:11:17.233249Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48902","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:11:17.258547Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48916","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:11:17.380454Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48936","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-24T14:12:13.096194Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-11-24T14:12:13.096256Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-007087","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	{"level":"error","ts":"2025-11-24T14:12:13.096351Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-24T14:12:13.096609Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-24T14:12:13.239491Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-11-24T14:12:13.239555Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-24T14:12:13.239639Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-24T14:12:13.239671Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-24T14:12:13.239680Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.76.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-11-24T14:12:13.239650Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-24T14:12:13.239702Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-24T14:12:13.239727Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"ea7e25599daad906","current-leader-member-id":"ea7e25599daad906"}
	{"level":"info","ts":"2025-11-24T14:12:13.239772Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-11-24T14:12:13.239785Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-11-24T14:12:13.243114Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"error","ts":"2025-11-24T14:12:13.243203Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.76.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-24T14:12:13.243235Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-11-24T14:12:13.243244Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-007087","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	
	
	==> kernel <==
	 14:12:49 up  1:55,  0 user,  load average: 2.90, 2.39, 2.12
	Linux pause-007087 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [2e7813c68b435644db6b5b631c6a51381cbf7671e158fa0cff68b4d62ce0855c] <==
	I1124 14:12:23.916455       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1124 14:12:23.921184       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1124 14:12:23.923553       1 main.go:148] setting mtu 1500 for CNI 
	I1124 14:12:23.923605       1 main.go:178] kindnetd IP family: "ipv4"
	I1124 14:12:23.923645       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-24T14:12:24Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1124 14:12:24.140921       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1124 14:12:24.140995       1 controller.go:381] "Waiting for informer caches to sync"
	I1124 14:12:24.141029       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1124 14:12:24.141178       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1124 14:12:31.147871       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:serviceaccount:kube-system:kindnet\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1124 14:12:31.243464       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1124 14:12:31.243558       1 metrics.go:72] Registering metrics
	I1124 14:12:31.243643       1 controller.go:711] "Syncing nftables rules"
	I1124 14:12:34.141418       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1124 14:12:34.141477       1 main.go:301] handling current node
	I1124 14:12:44.140923       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1124 14:12:44.141004       1 main.go:301] handling current node
	
	
	==> kindnet [f29d9247e0cecb84740fabf059644f3deb00691f26554c62425089202ee4784a] <==
	I1124 14:11:27.716221       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1124 14:11:27.716900       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1124 14:11:27.722445       1 main.go:148] setting mtu 1500 for CNI 
	I1124 14:11:27.722477       1 main.go:178] kindnetd IP family: "ipv4"
	I1124 14:11:27.722499       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-24T14:11:27Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1124 14:11:27.933823       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1124 14:11:27.933855       1 controller.go:381] "Waiting for informer caches to sync"
	I1124 14:11:27.933865       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1124 14:11:27.934654       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1124 14:11:57.934482       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1124 14:11:57.934596       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1124 14:11:57.934723       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1124 14:11:57.934785       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1124 14:11:59.535343       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1124 14:11:59.535596       1 metrics.go:72] Registering metrics
	I1124 14:11:59.535668       1 controller.go:711] "Syncing nftables rules"
	I1124 14:12:07.937062       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1124 14:12:07.937116       1 main.go:301] handling current node
	
	
	==> kube-apiserver [d5ce86abc9d8a0d28e82df1fd675752db7a03c95cf7f8bd47507d4329d359af5] <==
	I1124 14:12:30.982397       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1124 14:12:31.063273       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1124 14:12:31.063428       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1124 14:12:31.105695       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1124 14:12:31.105772       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1124 14:12:31.162969       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1124 14:12:31.163069       1 policy_source.go:240] refreshing policies
	I1124 14:12:31.167089       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1124 14:12:31.170991       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 14:12:31.176746       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1124 14:12:31.176765       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1124 14:12:31.178933       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1124 14:12:31.189704       1 cache.go:39] Caches are synced for autoregister controller
	I1124 14:12:31.189859       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1124 14:12:31.190077       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1124 14:12:31.190258       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1124 14:12:31.206957       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	E1124 14:12:31.210559       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1124 14:12:31.234155       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1124 14:12:31.889462       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1124 14:12:33.101811       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1124 14:12:34.369339       1 controller.go:667] quota admission added evaluator for: endpoints
	I1124 14:12:34.440138       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1124 14:12:34.687984       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1124 14:12:34.740328       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-apiserver [dc37c94894a8d5ee7bf3e1b55fffde9e281f4c1dda3b940aca34d6c4af33d588] <==
	W1124 14:12:13.120592       1 logging.go:55] [core] [Channel #103 SubChannel #105]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1124 14:12:13.120613       1 logging.go:55] [core] [Channel #43 SubChannel #45]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1124 14:12:13.120663       1 logging.go:55] [core] [Channel #119 SubChannel #121]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1124 14:12:13.120714       1 logging.go:55] [core] [Channel #59 SubChannel #61]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1124 14:12:13.120745       1 logging.go:55] [core] [Channel #135 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1124 14:12:13.120796       1 logging.go:55] [core] [Channel #99 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1124 14:12:13.120846       1 logging.go:55] [core] [Channel #139 SubChannel #141]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1124 14:12:13.120896       1 logging.go:55] [core] [Channel #35 SubChannel #37]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1124 14:12:13.120945       1 logging.go:55] [core] [Channel #175 SubChannel #177]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1124 14:12:13.120994       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1124 14:12:13.121120       1 logging.go:55] [core] [Channel #13 SubChannel #15]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1124 14:12:13.120715       1 logging.go:55] [core] [Channel #211 SubChannel #213]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1124 14:12:13.121316       1 logging.go:55] [core] [Channel #87 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1124 14:12:13.121378       1 logging.go:55] [core] [Channel #251 SubChannel #253]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1124 14:12:13.121287       1 logging.go:55] [core] [Channel #63 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1124 14:12:13.121499       1 logging.go:55] [core] [Channel #159 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1124 14:12:13.121563       1 logging.go:55] [core] [Channel #243 SubChannel #245]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1124 14:12:13.121439       1 logging.go:55] [core] [Channel #55 SubChannel #57]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1124 14:12:13.121648       1 logging.go:55] [core] [Channel #83 SubChannel #85]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1124 14:12:13.121721       1 logging.go:55] [core] [Channel #31 SubChannel #33]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1124 14:12:13.121780       1 logging.go:55] [core] [Channel #191 SubChannel #193]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1124 14:12:13.121469       1 logging.go:55] [core] [Channel #75 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1124 14:12:13.121620       1 logging.go:55] [core] [Channel #95 SubChannel #97]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1124 14:12:13.122102       1 logging.go:55] [core] [Channel #231 SubChannel #233]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1124 14:12:13.122486       1 logging.go:55] [core] [Channel #195 SubChannel #197]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [76c82c34484bb53fc1c279f6d1eb1687216631d32b8457d880b2c185a6175dbd] <==
	I1124 14:12:34.360352       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1124 14:12:34.360359       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1124 14:12:34.364293       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1124 14:12:34.365687       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1124 14:12:34.370365       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1124 14:12:34.370449       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1124 14:12:34.370523       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-007087"
	I1124 14:12:34.370563       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1124 14:12:34.371023       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1124 14:12:34.373751       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 14:12:34.376302       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1124 14:12:34.380711       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1124 14:12:34.380774       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1124 14:12:34.381846       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1124 14:12:34.381929       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1124 14:12:34.381981       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1124 14:12:34.382016       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1124 14:12:34.383303       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1124 14:12:34.389934       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 14:12:34.390025       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1124 14:12:34.390056       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1124 14:12:34.391008       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 14:12:34.391115       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1124 14:12:34.395513       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1124 14:12:34.402547       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	
	
	==> kube-controller-manager [f69c60036ed0a1883223128524650e3b4bafdba6ddee0a04f7b68d1b82f62adb] <==
	I1124 14:11:26.386201       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1124 14:11:26.386234       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1124 14:11:26.386551       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1124 14:11:26.386622       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1124 14:11:26.386670       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1124 14:11:26.386710       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1124 14:11:26.397302       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-007087" podCIDRs=["10.244.0.0/24"]
	I1124 14:11:26.400973       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1124 14:11:26.421305       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 14:11:26.423713       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1124 14:11:26.427490       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1124 14:11:26.431885       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1124 14:11:26.432027       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1124 14:11:26.432084       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1124 14:11:26.432097       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1124 14:11:26.432108       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1124 14:11:26.432119       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1124 14:11:26.440711       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1124 14:11:26.444066       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 14:11:26.444255       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 14:11:26.444289       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1124 14:11:26.444336       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1124 14:11:26.447456       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1124 14:11:26.452786       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1124 14:12:11.388203       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [190f787cc9b57074c59aeec3510206ebd7988052da942a584ab393775b5ab95e] <==
	I1124 14:11:28.421852       1 server_linux.go:53] "Using iptables proxy"
	I1124 14:11:28.509689       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1124 14:11:28.610440       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1124 14:11:28.610474       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1124 14:11:28.610542       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 14:11:28.631704       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 14:11:28.631761       1 server_linux.go:132] "Using iptables Proxier"
	I1124 14:11:28.635441       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 14:11:28.635743       1 server.go:527] "Version info" version="v1.34.1"
	I1124 14:11:28.635817       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 14:11:28.640406       1 config.go:200] "Starting service config controller"
	I1124 14:11:28.640495       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 14:11:28.640540       1 config.go:106] "Starting endpoint slice config controller"
	I1124 14:11:28.640568       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 14:11:28.640604       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 14:11:28.640630       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1124 14:11:28.641690       1 config.go:309] "Starting node config controller"
	I1124 14:11:28.642615       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 14:11:28.642680       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1124 14:11:28.741478       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1124 14:11:28.741592       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1124 14:11:28.741659       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [5fcf211b67d41ed778023e86be7ab49fa92d6c1cc001a3c1bec7b4b36589f5ab] <==
	I1124 14:12:28.619529       1 server_linux.go:53] "Using iptables proxy"
	I1124 14:12:31.460393       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1124 14:12:31.696314       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1124 14:12:31.696378       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1124 14:12:31.696473       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 14:12:32.464838       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 14:12:32.464976       1 server_linux.go:132] "Using iptables Proxier"
	I1124 14:12:32.471558       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 14:12:32.471940       1 server.go:527] "Version info" version="v1.34.1"
	I1124 14:12:32.472217       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 14:12:32.473603       1 config.go:200] "Starting service config controller"
	I1124 14:12:32.489743       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 14:12:32.489880       1 config.go:106] "Starting endpoint slice config controller"
	I1124 14:12:32.489910       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 14:12:32.489947       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 14:12:32.489973       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1124 14:12:32.490756       1 config.go:309] "Starting node config controller"
	I1124 14:12:32.490835       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 14:12:32.490868       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1124 14:12:32.591419       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1124 14:12:32.626010       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1124 14:12:32.626075       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [7bec6bf85a9734e71b9d06193b900b58d7d8467d320c984cf1a526c52bcc77c3] <==
	I1124 14:12:26.662215       1 serving.go:386] Generated self-signed cert in-memory
	I1124 14:12:32.838833       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1124 14:12:32.838936       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 14:12:32.847922       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1124 14:12:32.848175       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1124 14:12:32.848238       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1124 14:12:32.848296       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1124 14:12:32.867846       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 14:12:32.875654       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 14:12:32.875761       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1124 14:12:32.875807       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1124 14:12:32.948379       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1124 14:12:32.976546       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1124 14:12:32.976694       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [957acc83a13c0ca073c6f33777b9bd8699a34d773dde4dfad58149f8ebc53ada] <==
	E1124 14:11:19.814311       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1124 14:11:19.814462       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1124 14:11:19.814505       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1124 14:11:19.814540       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1124 14:11:19.814578       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1124 14:11:19.814611       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1124 14:11:19.814645       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1124 14:11:19.814679       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1124 14:11:19.814711       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1124 14:11:19.814743       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1124 14:11:19.814798       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1124 14:11:19.814831       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1124 14:11:19.814865       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1124 14:11:19.814951       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1124 14:11:19.814982       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1124 14:11:19.815017       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1124 14:11:19.817578       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1124 14:11:19.820661       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	I1124 14:11:20.879422       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 14:12:13.090554       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1124 14:12:13.090600       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 14:12:13.090601       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1124 14:12:13.090618       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1124 14:12:13.090762       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1124 14:12:13.090777       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Nov 24 14:12:23 pause-007087 kubelet[1311]: E1124 14:12:23.576067    1311 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-007087\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="b7bbf56258ba9f18243e15d31ecfaaab" pod="kube-system/kube-controller-manager-pause-007087"
	Nov 24 14:12:23 pause-007087 kubelet[1311]: I1124 14:12:23.615127    1311 scope.go:117] "RemoveContainer" containerID="190f787cc9b57074c59aeec3510206ebd7988052da942a584ab393775b5ab95e"
	Nov 24 14:12:23 pause-007087 kubelet[1311]: E1124 14:12:23.622220    1311 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-gwt47\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="c69586cb-c2a9-4f7a-b6f8-cb1619cdeab2" pod="kube-system/coredns-66bc5c9577-gwt47"
	Nov 24 14:12:23 pause-007087 kubelet[1311]: E1124 14:12:23.622460    1311 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-007087\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="bbb1c59dc901ce0e11ae2d021e4ee1a3" pod="kube-system/kube-scheduler-pause-007087"
	Nov 24 14:12:23 pause-007087 kubelet[1311]: E1124 14:12:23.622623    1311 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-007087\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="90af4cfd87c3f9c666ba6c00ed39c9e2" pod="kube-system/etcd-pause-007087"
	Nov 24 14:12:23 pause-007087 kubelet[1311]: E1124 14:12:23.622785    1311 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-007087\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="295589c48cf97a4f3c8390e963978fe8" pod="kube-system/kube-apiserver-pause-007087"
	Nov 24 14:12:23 pause-007087 kubelet[1311]: E1124 14:12:23.622939    1311 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-007087\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="b7bbf56258ba9f18243e15d31ecfaaab" pod="kube-system/kube-controller-manager-pause-007087"
	Nov 24 14:12:23 pause-007087 kubelet[1311]: E1124 14:12:23.623281    1311 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rdjsw\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="21c70dc9-5b77-4fca-91a1-07f91d701426" pod="kube-system/kube-proxy-rdjsw"
	Nov 24 14:12:23 pause-007087 kubelet[1311]: I1124 14:12:23.636082    1311 scope.go:117] "RemoveContainer" containerID="f29d9247e0cecb84740fabf059644f3deb00691f26554c62425089202ee4784a"
	Nov 24 14:12:23 pause-007087 kubelet[1311]: E1124 14:12:23.636619    1311 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-gwt47\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="c69586cb-c2a9-4f7a-b6f8-cb1619cdeab2" pod="kube-system/coredns-66bc5c9577-gwt47"
	Nov 24 14:12:23 pause-007087 kubelet[1311]: E1124 14:12:23.636783    1311 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-007087\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="bbb1c59dc901ce0e11ae2d021e4ee1a3" pod="kube-system/kube-scheduler-pause-007087"
	Nov 24 14:12:23 pause-007087 kubelet[1311]: E1124 14:12:23.636920    1311 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-007087\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="90af4cfd87c3f9c666ba6c00ed39c9e2" pod="kube-system/etcd-pause-007087"
	Nov 24 14:12:23 pause-007087 kubelet[1311]: E1124 14:12:23.637056    1311 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-007087\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="295589c48cf97a4f3c8390e963978fe8" pod="kube-system/kube-apiserver-pause-007087"
	Nov 24 14:12:23 pause-007087 kubelet[1311]: E1124 14:12:23.637193    1311 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-007087\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="b7bbf56258ba9f18243e15d31ecfaaab" pod="kube-system/kube-controller-manager-pause-007087"
	Nov 24 14:12:23 pause-007087 kubelet[1311]: E1124 14:12:23.637325    1311 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kindnet-z2thf\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="b9875adf-ee35-420a-88ef-166ae56bbed5" pod="kube-system/kindnet-z2thf"
	Nov 24 14:12:23 pause-007087 kubelet[1311]: E1124 14:12:23.637458    1311 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rdjsw\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="21c70dc9-5b77-4fca-91a1-07f91d701426" pod="kube-system/kube-proxy-rdjsw"
	Nov 24 14:12:31 pause-007087 kubelet[1311]: E1124 14:12:31.034458    1311 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-apiserver-pause-007087\" is forbidden: User \"system:node:pause-007087\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-007087' and this object" podUID="295589c48cf97a4f3c8390e963978fe8" pod="kube-system/kube-apiserver-pause-007087"
	Nov 24 14:12:31 pause-007087 kubelet[1311]: E1124 14:12:31.034712    1311 reflector.go:205] "Failed to watch" err="configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:pause-007087\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-007087' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap"
	Nov 24 14:12:31 pause-007087 kubelet[1311]: E1124 14:12:31.034738    1311 reflector.go:205] "Failed to watch" err="configmaps \"kube-proxy\" is forbidden: User \"system:node:pause-007087\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-007087' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap"
	Nov 24 14:12:31 pause-007087 kubelet[1311]: E1124 14:12:31.034751    1311 reflector.go:205] "Failed to watch" err="configmaps \"coredns\" is forbidden: User \"system:node:pause-007087\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-007087' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"coredns\"" type="*v1.ConfigMap"
	Nov 24 14:12:31 pause-007087 kubelet[1311]: E1124 14:12:31.140788    1311 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-controller-manager-pause-007087\" is forbidden: User \"system:node:pause-007087\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-007087' and this object" podUID="b7bbf56258ba9f18243e15d31ecfaaab" pod="kube-system/kube-controller-manager-pause-007087"
	Nov 24 14:12:31 pause-007087 kubelet[1311]: W1124 14:12:31.625443    1311 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Nov 24 14:12:43 pause-007087 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 24 14:12:44 pause-007087 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 24 14:12:44 pause-007087 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
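(The dump above is the post-mortem `minikube logs` output for the pause-007087 profile. As a sketch, using the profile name and the command form from the advice box later in this report, the same logs can be captured to a file with:

	out/minikube-linux-arm64 -p pause-007087 logs --file=logs.txt
)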
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-007087 -n pause-007087
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-007087 -n pause-007087: exit status 2 (439.989105ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-007087 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/Pause (7.56s)
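(Note on the status probe above: `minikube status --format` takes a Go template over the status fields, so individual components can be queried directly. A minimal sketch, assuming the standard status fields Host, Kubelet, APIServer, and Kubeconfig, with the profile name from this report:

	out/minikube-linux-arm64 status -p pause-007087 --format='{{.Host}} {{.Kubelet}} {{.APIServer}} {{.Kubeconfig}}'

The exit status 2 with APIServer reporting "Running", as shown above, is why the helper marks the status error as "may be ok".)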

                                                
                                    

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.5s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-706771 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-706771 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (265.670164ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T14:15:42Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-706771 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-706771 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-706771 describe deploy/metrics-server -n kube-system: exit status 1 (78.062892ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-706771 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
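(The MK_ADDON_ENABLE_PAUSED failure above comes from minikube's paused-state probe, which, per the stderr block, shells out to `sudo runc list -f json` inside the node and fails with "open /run/runc: no such file or directory" -- /run/runc being runc's default --root when run as root. A hedged reproduction against the node container, assuming the docker driver and the container name from this report:

	# re-run the probe minikube used (command taken from the stderr above);
	# on this failure it prints: open /run/runc: no such file or directory
	docker exec old-k8s-version-706771 sudo runc list -f json
)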
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-706771
helpers_test.go:243: (dbg) docker inspect old-k8s-version-706771:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "2c35ba6c594249d37f1a4deea6c290d24e73eb238087551c9a88d96856e2c4a5",
	        "Created": "2025-11-24T14:14:37.23388933Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 181205,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T14:14:37.286583893Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/2c35ba6c594249d37f1a4deea6c290d24e73eb238087551c9a88d96856e2c4a5/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2c35ba6c594249d37f1a4deea6c290d24e73eb238087551c9a88d96856e2c4a5/hostname",
	        "HostsPath": "/var/lib/docker/containers/2c35ba6c594249d37f1a4deea6c290d24e73eb238087551c9a88d96856e2c4a5/hosts",
	        "LogPath": "/var/lib/docker/containers/2c35ba6c594249d37f1a4deea6c290d24e73eb238087551c9a88d96856e2c4a5/2c35ba6c594249d37f1a4deea6c290d24e73eb238087551c9a88d96856e2c4a5-json.log",
	        "Name": "/old-k8s-version-706771",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-706771:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-706771",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "2c35ba6c594249d37f1a4deea6c290d24e73eb238087551c9a88d96856e2c4a5",
	                "LowerDir": "/var/lib/docker/overlay2/579064394e8bd6cc39cd24d2c9fba4cd60161e321fb2311b577ba2e021b4a846-init/diff:/var/lib/docker/overlay2/13a44a1c9c7389f495d930a01834ff28273a0e5eb2fe3411fc4db3ff0709690d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/579064394e8bd6cc39cd24d2c9fba4cd60161e321fb2311b577ba2e021b4a846/merged",
	                "UpperDir": "/var/lib/docker/overlay2/579064394e8bd6cc39cd24d2c9fba4cd60161e321fb2311b577ba2e021b4a846/diff",
	                "WorkDir": "/var/lib/docker/overlay2/579064394e8bd6cc39cd24d2c9fba4cd60161e321fb2311b577ba2e021b4a846/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-706771",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-706771/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-706771",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-706771",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-706771",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "646acdc270e003863b624c1f990e02a6872c43b61948a9a2401d7c6fdb145e2b",
	            "SandboxKey": "/var/run/docker/netns/646acdc270e0",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33048"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33049"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33052"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33050"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33051"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-706771": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "52:65:ad:d3:32:62",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e96418466e2f103f798236cd2dcf5c79e483562bd7b0670ad5747c94e35ac056",
	                    "EndpointID": "f8c2c163357b4a7cb8c318b666ae5ad2fce13e707cf53ec19a6e890f5cc1e356",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-706771",
	                        "2c35ba6c5942"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
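The HostConfig block above confirms the start flags were applied: Memory 3221225472 bytes is exactly 3072 MiB (3072 * 1024 * 1024) for --memory=3072, and NanoCpus 2000000000 is 2 CPUs for --cpus=2. NetworkSettings.Ports shows each exposed port published on 127.0.0.1 with an ephemeral host port (8443/tcp mapped to 33051); the provisioning log below reads the 22/tcp mapping the same way to reach SSH. Below is a sketch of recovering one such mapping, assuming a local docker CLI; hostPort and inspectEntry are hypothetical names:

// portlookup.go - a sketch of recovering the ephemeral host port for a
// container port, the same mapping minikube reads later in this log
// with `docker container inspect -f` (22/tcp -> 33048).
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// inspectEntry models only NetworkSettings.Ports from `docker inspect`.
type inspectEntry struct {
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIP   string `json:"HostIp"`
			HostPort string `json:"HostPort"`
		} `json:"Ports"`
	} `json:"NetworkSettings"`
}

func hostPort(container, containerPort string) (string, error) {
	out, err := exec.Command("docker", "inspect", container).Output()
	if err != nil {
		return "", err
	}
	var entries []inspectEntry // docker inspect returns a JSON array
	if err := json.Unmarshal(out, &entries); err != nil {
		return "", err
	}
	if len(entries) == 0 {
		return "", fmt.Errorf("no such container: %s", container)
	}
	bindings := entries[0].NetworkSettings.Ports[containerPort]
	if len(bindings) == 0 {
		return "", fmt.Errorf("%s is not published", containerPort)
	}
	return bindings[0].HostPort, nil // e.g. "33051" for "8443/tcp" above
}

func main() {
	p, err := hostPort("old-k8s-version-706771", "8443/tcp")
	fmt.Println(p, err)
}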
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-706771 -n old-k8s-version-706771
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-706771 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-706771 logs -n 25: (1.199225141s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-626991 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                     │ cilium-626991             │ jenkins │ v1.37.0 │ 24 Nov 25 14:13 UTC │                     │
	│ ssh     │ -p cilium-626991 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                │ cilium-626991             │ jenkins │ v1.37.0 │ 24 Nov 25 14:13 UTC │                     │
	│ ssh     │ -p cilium-626991 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                          │ cilium-626991             │ jenkins │ v1.37.0 │ 24 Nov 25 14:13 UTC │                     │
	│ ssh     │ -p cilium-626991 sudo cri-dockerd --version                                                                                                                                                                                                   │ cilium-626991             │ jenkins │ v1.37.0 │ 24 Nov 25 14:13 UTC │                     │
	│ ssh     │ -p cilium-626991 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ cilium-626991             │ jenkins │ v1.37.0 │ 24 Nov 25 14:13 UTC │                     │
	│ ssh     │ -p cilium-626991 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-626991             │ jenkins │ v1.37.0 │ 24 Nov 25 14:13 UTC │                     │
	│ ssh     │ -p cilium-626991 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-626991             │ jenkins │ v1.37.0 │ 24 Nov 25 14:13 UTC │                     │
	│ ssh     │ -p cilium-626991 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-626991             │ jenkins │ v1.37.0 │ 24 Nov 25 14:13 UTC │                     │
	│ ssh     │ -p cilium-626991 sudo containerd config dump                                                                                                                                                                                                  │ cilium-626991             │ jenkins │ v1.37.0 │ 24 Nov 25 14:13 UTC │                     │
	│ ssh     │ -p cilium-626991 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-626991             │ jenkins │ v1.37.0 │ 24 Nov 25 14:13 UTC │                     │
	│ ssh     │ -p cilium-626991 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-626991             │ jenkins │ v1.37.0 │ 24 Nov 25 14:13 UTC │                     │
	│ ssh     │ -p cilium-626991 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-626991             │ jenkins │ v1.37.0 │ 24 Nov 25 14:13 UTC │                     │
	│ ssh     │ -p cilium-626991 sudo crio config                                                                                                                                                                                                             │ cilium-626991             │ jenkins │ v1.37.0 │ 24 Nov 25 14:13 UTC │                     │
	│ delete  │ -p cilium-626991                                                                                                                                                                                                                              │ cilium-626991             │ jenkins │ v1.37.0 │ 24 Nov 25 14:13 UTC │ 24 Nov 25 14:13 UTC │
	│ start   │ -p force-systemd-env-289577 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-289577  │ jenkins │ v1.37.0 │ 24 Nov 25 14:13 UTC │ 24 Nov 25 14:13 UTC │
	│ ssh     │ force-systemd-flag-928059 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-928059 │ jenkins │ v1.37.0 │ 24 Nov 25 14:13 UTC │ 24 Nov 25 14:13 UTC │
	│ delete  │ -p force-systemd-flag-928059                                                                                                                                                                                                                  │ force-systemd-flag-928059 │ jenkins │ v1.37.0 │ 24 Nov 25 14:13 UTC │ 24 Nov 25 14:13 UTC │
	│ start   │ -p cert-expiration-032076 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-032076    │ jenkins │ v1.37.0 │ 24 Nov 25 14:13 UTC │ 24 Nov 25 14:14 UTC │
	│ delete  │ -p force-systemd-env-289577                                                                                                                                                                                                                   │ force-systemd-env-289577  │ jenkins │ v1.37.0 │ 24 Nov 25 14:13 UTC │ 24 Nov 25 14:13 UTC │
	│ start   │ -p cert-options-097221 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-097221       │ jenkins │ v1.37.0 │ 24 Nov 25 14:13 UTC │ 24 Nov 25 14:14 UTC │
	│ ssh     │ cert-options-097221 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-097221       │ jenkins │ v1.37.0 │ 24 Nov 25 14:14 UTC │ 24 Nov 25 14:14 UTC │
	│ ssh     │ -p cert-options-097221 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-097221       │ jenkins │ v1.37.0 │ 24 Nov 25 14:14 UTC │ 24 Nov 25 14:14 UTC │
	│ delete  │ -p cert-options-097221                                                                                                                                                                                                                        │ cert-options-097221       │ jenkins │ v1.37.0 │ 24 Nov 25 14:14 UTC │ 24 Nov 25 14:14 UTC │
	│ start   │ -p old-k8s-version-706771 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-706771    │ jenkins │ v1.37.0 │ 24 Nov 25 14:14 UTC │ 24 Nov 25 14:15 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-706771 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-706771    │ jenkins │ v1.37.0 │ 24 Nov 25 14:15 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 14:14:30
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 14:14:30.937155  180819 out.go:360] Setting OutFile to fd 1 ...
	I1124 14:14:30.937307  180819 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 14:14:30.937320  180819 out.go:374] Setting ErrFile to fd 2...
	I1124 14:14:30.937325  180819 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 14:14:30.937602  180819 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-2805/.minikube/bin
	I1124 14:14:30.938046  180819 out.go:368] Setting JSON to false
	I1124 14:14:30.938962  180819 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":7022,"bootTime":1763986649,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1124 14:14:30.939031  180819 start.go:143] virtualization:  
	I1124 14:14:30.944252  180819 out.go:179] * [old-k8s-version-706771] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1124 14:14:30.947021  180819 out.go:179]   - MINIKUBE_LOCATION=21932
	I1124 14:14:30.947098  180819 notify.go:221] Checking for updates...
	I1124 14:14:30.953009  180819 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 14:14:30.955706  180819 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21932-2805/kubeconfig
	I1124 14:14:30.958481  180819 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21932-2805/.minikube
	I1124 14:14:30.961342  180819 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1124 14:14:30.964073  180819 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 14:14:30.967488  180819 config.go:182] Loaded profile config "cert-expiration-032076": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 14:14:30.967631  180819 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 14:14:30.999670  180819 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1124 14:14:30.999823  180819 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 14:14:31.060077  180819 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-24 14:14:31.04990501 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 14:14:31.060185  180819 docker.go:319] overlay module found
	I1124 14:14:31.063287  180819 out.go:179] * Using the docker driver based on user configuration
	I1124 14:14:31.066129  180819 start.go:309] selected driver: docker
	I1124 14:14:31.066154  180819 start.go:927] validating driver "docker" against <nil>
	I1124 14:14:31.066169  180819 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 14:14:31.066941  180819 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 14:14:31.124180  180819 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-24 14:14:31.114263544 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 14:14:31.124334  180819 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1124 14:14:31.124562  180819 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 14:14:31.127453  180819 out.go:179] * Using Docker driver with root privileges
	I1124 14:14:31.130267  180819 cni.go:84] Creating CNI manager for ""
	I1124 14:14:31.130345  180819 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 14:14:31.130358  180819 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1124 14:14:31.130436  180819 start.go:353] cluster config:
	{Name:old-k8s-version-706771 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-706771 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 14:14:31.133668  180819 out.go:179] * Starting "old-k8s-version-706771" primary control-plane node in "old-k8s-version-706771" cluster
	I1124 14:14:31.136435  180819 cache.go:134] Beginning downloading kic base image for docker with crio
	I1124 14:14:31.139485  180819 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1124 14:14:31.142348  180819 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1124 14:14:31.142375  180819 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1124 14:14:31.142404  180819 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21932-2805/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1124 14:14:31.142420  180819 cache.go:65] Caching tarball of preloaded images
	I1124 14:14:31.142499  180819 preload.go:238] Found /home/jenkins/minikube-integration/21932-2805/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1124 14:14:31.142509  180819 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1124 14:14:31.142638  180819 profile.go:143] Saving config to /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/old-k8s-version-706771/config.json ...
	I1124 14:14:31.142665  180819 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/old-k8s-version-706771/config.json: {Name:mk5a9c56786171d650a583d9bc3aaede4834c8f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:14:31.170481  180819 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1124 14:14:31.170506  180819 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1124 14:14:31.170526  180819 cache.go:240] Successfully downloaded all kic artifacts
	I1124 14:14:31.170557  180819 start.go:360] acquireMachinesLock for old-k8s-version-706771: {Name:mk711f4c72b219775cdb44b18881f9cc36cbc056 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 14:14:31.170676  180819 start.go:364] duration metric: took 99.217µs to acquireMachinesLock for "old-k8s-version-706771"
	I1124 14:14:31.170705  180819 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-706771 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-706771 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 14:14:31.170821  180819 start.go:125] createHost starting for "" (driver="docker")
	I1124 14:14:31.174190  180819 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1124 14:14:31.174460  180819 start.go:159] libmachine.API.Create for "old-k8s-version-706771" (driver="docker")
	I1124 14:14:31.174501  180819 client.go:173] LocalClient.Create starting
	I1124 14:14:31.174587  180819 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca.pem
	I1124 14:14:31.174624  180819 main.go:143] libmachine: Decoding PEM data...
	I1124 14:14:31.174644  180819 main.go:143] libmachine: Parsing certificate...
	I1124 14:14:31.174703  180819 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21932-2805/.minikube/certs/cert.pem
	I1124 14:14:31.174727  180819 main.go:143] libmachine: Decoding PEM data...
	I1124 14:14:31.174745  180819 main.go:143] libmachine: Parsing certificate...
	I1124 14:14:31.175112  180819 cli_runner.go:164] Run: docker network inspect old-k8s-version-706771 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1124 14:14:31.191900  180819 cli_runner.go:211] docker network inspect old-k8s-version-706771 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1124 14:14:31.191993  180819 network_create.go:284] running [docker network inspect old-k8s-version-706771] to gather additional debugging logs...
	I1124 14:14:31.192016  180819 cli_runner.go:164] Run: docker network inspect old-k8s-version-706771
	W1124 14:14:31.207377  180819 cli_runner.go:211] docker network inspect old-k8s-version-706771 returned with exit code 1
	I1124 14:14:31.207423  180819 network_create.go:287] error running [docker network inspect old-k8s-version-706771]: docker network inspect old-k8s-version-706771: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network old-k8s-version-706771 not found
	I1124 14:14:31.207436  180819 network_create.go:289] output of [docker network inspect old-k8s-version-706771]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network old-k8s-version-706771 not found
	
	** /stderr **
	I1124 14:14:31.207556  180819 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 14:14:31.224054  180819 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-b3087ee9f269 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:1a:07:60:94:e6:54} reservation:<nil>}
	I1124 14:14:31.224424  180819 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-87dca5a19352 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:e6:6c:c1:85:45:94} reservation:<nil>}
	I1124 14:14:31.224853  180819 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-9e995bd1b79e IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:82:f1:73:f5:6f:cf} reservation:<nil>}
	I1124 14:14:31.225285  180819 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a09590}
	I1124 14:14:31.225305  180819 network_create.go:124] attempt to create docker network old-k8s-version-706771 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1124 14:14:31.225368  180819 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-706771 old-k8s-version-706771
	I1124 14:14:31.298664  180819 network_create.go:108] docker network old-k8s-version-706771 192.168.76.0/24 created
	I1124 14:14:31.298692  180819 kic.go:121] calculated static IP "192.168.76.2" for the "old-k8s-version-706771" container
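The network_create lines above show the subnet scan: candidate private /24s are probed in a fixed order, any subnet already owned by a host interface is skipped (192.168.49.0, 192.168.58.0 and 192.168.67.0 here), and the first free one wins, with the gateway derived as .1 and the node's static IP as .2. A rough sketch of that scan follows; isTaken is a stand-in for minikube's actual reservation logic, not its real implementation:

// subnetpick.go - a sketch of the first-free-/24 scan visible above: walk
// candidate private subnets, skip ones already held by a host interface,
// and derive the gateway (.1) and node IP (.2) from the winner.
package main

import (
	"fmt"
	"net"
)

// isTaken reports whether any local interface already sits in subnet.
func isTaken(subnet *net.IPNet) bool {
	addrs, err := net.InterfaceAddrs()
	if err != nil {
		return true // fail closed: treat the subnet as taken
	}
	for _, a := range addrs {
		if ipnet, ok := a.(*net.IPNet); ok && subnet.Contains(ipnet.IP) {
			return true
		}
	}
	return false
}

func main() {
	// The same candidate order the log shows: 49, 58, 67, 76, ...
	for _, base := range []string{"192.168.49.0/24", "192.168.58.0/24",
		"192.168.67.0/24", "192.168.76.0/24"} {
		_, subnet, _ := net.ParseCIDR(base)
		if isTaken(subnet) {
			fmt.Println("skipping taken subnet", base)
			continue
		}
		ip := subnet.IP.To4()
		gateway := net.IPv4(ip[0], ip[1], ip[2], 1)
		node := net.IPv4(ip[0], ip[1], ip[2], 2)
		fmt.Println("using", base, "gateway", gateway, "node", node)
		return
	}
}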
	I1124 14:14:31.298761  180819 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1124 14:14:31.315000  180819 cli_runner.go:164] Run: docker volume create old-k8s-version-706771 --label name.minikube.sigs.k8s.io=old-k8s-version-706771 --label created_by.minikube.sigs.k8s.io=true
	I1124 14:14:31.333574  180819 oci.go:103] Successfully created a docker volume old-k8s-version-706771
	I1124 14:14:31.333662  180819 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-706771-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-706771 --entrypoint /usr/bin/test -v old-k8s-version-706771:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1124 14:14:31.869779  180819 oci.go:107] Successfully prepared a docker volume old-k8s-version-706771
	I1124 14:14:31.869837  180819 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1124 14:14:31.869847  180819 kic.go:194] Starting extracting preloaded images to volume ...
	I1124 14:14:31.869917  180819 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21932-2805/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-706771:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir
	I1124 14:14:37.164305  180819 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21932-2805/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-706771:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir: (5.294347594s)
	I1124 14:14:37.164337  180819 kic.go:203] duration metric: took 5.294487075s to extract preloaded images to volume ...
	W1124 14:14:37.164477  180819 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1124 14:14:37.164583  180819 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1124 14:14:37.219077  180819 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-706771 --name old-k8s-version-706771 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-706771 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-706771 --network old-k8s-version-706771 --ip 192.168.76.2 --volume old-k8s-version-706771:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1124 14:14:37.515712  180819 cli_runner.go:164] Run: docker container inspect old-k8s-version-706771 --format={{.State.Running}}
	I1124 14:14:37.539694  180819 cli_runner.go:164] Run: docker container inspect old-k8s-version-706771 --format={{.State.Status}}
	I1124 14:14:37.562316  180819 cli_runner.go:164] Run: docker exec old-k8s-version-706771 stat /var/lib/dpkg/alternatives/iptables
	I1124 14:14:37.634212  180819 oci.go:144] the created container "old-k8s-version-706771" has a running status.
	I1124 14:14:37.634239  180819 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21932-2805/.minikube/machines/old-k8s-version-706771/id_rsa...
	I1124 14:14:37.930804  180819 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21932-2805/.minikube/machines/old-k8s-version-706771/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1124 14:14:37.956958  180819 cli_runner.go:164] Run: docker container inspect old-k8s-version-706771 --format={{.State.Status}}
	I1124 14:14:37.981118  180819 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1124 14:14:37.981141  180819 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-706771 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1124 14:14:38.057222  180819 cli_runner.go:164] Run: docker container inspect old-k8s-version-706771 --format={{.State.Status}}
	I1124 14:14:38.077306  180819 machine.go:94] provisionDockerMachine start ...
	I1124 14:14:38.077409  180819 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-706771
	I1124 14:14:38.096592  180819 main.go:143] libmachine: Using SSH client type: native
	I1124 14:14:38.096940  180819 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33048 <nil> <nil>}
	I1124 14:14:38.096986  180819 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 14:14:38.097677  180819 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:56638->127.0.0.1:33048: read: connection reset by peer
	I1124 14:14:41.255099  180819 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-706771
	
	I1124 14:14:41.255124  180819 ubuntu.go:182] provisioning hostname "old-k8s-version-706771"
	I1124 14:14:41.255217  180819 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-706771
	I1124 14:14:41.272125  180819 main.go:143] libmachine: Using SSH client type: native
	I1124 14:14:41.272471  180819 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33048 <nil> <nil>}
	I1124 14:14:41.272488  180819 main.go:143] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-706771 && echo "old-k8s-version-706771" | sudo tee /etc/hostname
	I1124 14:14:41.437434  180819 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-706771
	
	I1124 14:14:41.437536  180819 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-706771
	I1124 14:14:41.456402  180819 main.go:143] libmachine: Using SSH client type: native
	I1124 14:14:41.456716  180819 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33048 <nil> <nil>}
	I1124 14:14:41.456740  180819 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-706771' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-706771/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-706771' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 14:14:41.607513  180819 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 14:14:41.607547  180819 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21932-2805/.minikube CaCertPath:/home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21932-2805/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21932-2805/.minikube}
	I1124 14:14:41.607571  180819 ubuntu.go:190] setting up certificates
	I1124 14:14:41.607589  180819 provision.go:84] configureAuth start
	I1124 14:14:41.607649  180819 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-706771
	I1124 14:14:41.624965  180819 provision.go:143] copyHostCerts
	I1124 14:14:41.625037  180819 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-2805/.minikube/ca.pem, removing ...
	I1124 14:14:41.625055  180819 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-2805/.minikube/ca.pem
	I1124 14:14:41.625134  180819 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21932-2805/.minikube/ca.pem (1078 bytes)
	I1124 14:14:41.625241  180819 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-2805/.minikube/cert.pem, removing ...
	I1124 14:14:41.625251  180819 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-2805/.minikube/cert.pem
	I1124 14:14:41.625282  180819 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-2805/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21932-2805/.minikube/cert.pem (1123 bytes)
	I1124 14:14:41.625363  180819 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-2805/.minikube/key.pem, removing ...
	I1124 14:14:41.625373  180819 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-2805/.minikube/key.pem
	I1124 14:14:41.625402  180819 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-2805/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21932-2805/.minikube/key.pem (1675 bytes)
	I1124 14:14:41.625459  180819 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21932-2805/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-706771 san=[127.0.0.1 192.168.76.2 localhost minikube old-k8s-version-706771]
	I1124 14:14:42.303752  180819 provision.go:177] copyRemoteCerts
	I1124 14:14:42.303837  180819 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 14:14:42.303888  180819 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-706771
	I1124 14:14:42.322908  180819 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33048 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/old-k8s-version-706771/id_rsa Username:docker}
	I1124 14:14:42.431238  180819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1124 14:14:42.449239  180819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1124 14:14:42.468312  180819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1124 14:14:42.488799  180819 provision.go:87] duration metric: took 881.172264ms to configureAuth
	I1124 14:14:42.488867  180819 ubuntu.go:206] setting minikube options for container-runtime
	I1124 14:14:42.489081  180819 config.go:182] Loaded profile config "old-k8s-version-706771": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1124 14:14:42.489185  180819 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-706771
	I1124 14:14:42.506861  180819 main.go:143] libmachine: Using SSH client type: native
	I1124 14:14:42.507187  180819 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33048 <nil> <nil>}
	I1124 14:14:42.507207  180819 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1124 14:14:42.815286  180819 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1124 14:14:42.815383  180819 machine.go:97] duration metric: took 4.738014447s to provisionDockerMachine
	I1124 14:14:42.815419  180819 client.go:176] duration metric: took 11.640897476s to LocalClient.Create
	I1124 14:14:42.815451  180819 start.go:167] duration metric: took 11.640992066s to libmachine.API.Create "old-k8s-version-706771"
	I1124 14:14:42.815463  180819 start.go:293] postStartSetup for "old-k8s-version-706771" (driver="docker")
	I1124 14:14:42.815474  180819 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 14:14:42.815542  180819 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 14:14:42.815588  180819 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-706771
	I1124 14:14:42.832978  180819 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33048 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/old-k8s-version-706771/id_rsa Username:docker}
	I1124 14:14:42.939583  180819 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 14:14:42.942812  180819 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 14:14:42.942843  180819 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 14:14:42.942856  180819 filesync.go:126] Scanning /home/jenkins/minikube-integration/21932-2805/.minikube/addons for local assets ...
	I1124 14:14:42.942914  180819 filesync.go:126] Scanning /home/jenkins/minikube-integration/21932-2805/.minikube/files for local assets ...
	I1124 14:14:42.943005  180819 filesync.go:149] local asset: /home/jenkins/minikube-integration/21932-2805/.minikube/files/etc/ssl/certs/46112.pem -> 46112.pem in /etc/ssl/certs
	I1124 14:14:42.943121  180819 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 14:14:42.950591  180819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/files/etc/ssl/certs/46112.pem --> /etc/ssl/certs/46112.pem (1708 bytes)
	I1124 14:14:42.974113  180819 start.go:296] duration metric: took 158.634598ms for postStartSetup
	I1124 14:14:42.974548  180819 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-706771
	I1124 14:14:42.992716  180819 profile.go:143] Saving config to /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/old-k8s-version-706771/config.json ...
	I1124 14:14:42.993053  180819 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 14:14:42.993097  180819 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-706771
	I1124 14:14:43.011243  180819 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33048 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/old-k8s-version-706771/id_rsa Username:docker}
	I1124 14:14:43.116432  180819 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 14:14:43.121162  180819 start.go:128] duration metric: took 11.950324368s to createHost
	I1124 14:14:43.121189  180819 start.go:83] releasing machines lock for "old-k8s-version-706771", held for 11.950500731s
	I1124 14:14:43.121261  180819 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-706771
	I1124 14:14:43.142832  180819 ssh_runner.go:195] Run: cat /version.json
	I1124 14:14:43.142846  180819 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 14:14:43.142886  180819 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-706771
	I1124 14:14:43.142905  180819 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-706771
	I1124 14:14:43.173317  180819 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33048 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/old-k8s-version-706771/id_rsa Username:docker}
	I1124 14:14:43.174751  180819 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33048 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/old-k8s-version-706771/id_rsa Username:docker}
	I1124 14:14:43.371554  180819 ssh_runner.go:195] Run: systemctl --version
	I1124 14:14:43.379469  180819 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1124 14:14:43.420716  180819 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 14:14:43.425156  180819 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 14:14:43.425237  180819 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 14:14:43.454453  180819 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
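The find/mv pass above disables any bridge or podman CNI configs by appending a .mk_disabled suffix rather than deleting them. A sketch of the reverse operation, should those configs be needed again:

    # Restore CNI configs that minikube parked under the ".mk_disabled" suffix.
    for f in /etc/cni/net.d/*.mk_disabled; do
      sudo mv "$f" "${f%.mk_disabled}"
    done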
	I1124 14:14:43.454486  180819 start.go:496] detecting cgroup driver to use...
	I1124 14:14:43.454537  180819 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1124 14:14:43.454617  180819 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1124 14:14:43.472632  180819 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1124 14:14:43.487035  180819 docker.go:218] disabling cri-docker service (if available) ...
	I1124 14:14:43.487153  180819 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 14:14:43.507285  180819 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 14:14:43.527555  180819 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 14:14:43.646980  180819 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 14:14:43.774072  180819 docker.go:234] disabling docker service ...
	I1124 14:14:43.774138  180819 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 14:14:43.795773  180819 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 14:14:43.809308  180819 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 14:14:43.943936  180819 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 14:14:44.080577  180819 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 14:14:44.096561  180819 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 14:14:44.110965  180819 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1124 14:14:44.111081  180819 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 14:14:44.122689  180819 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1124 14:14:44.122809  180819 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 14:14:44.132328  180819 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 14:14:44.141388  180819 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 14:14:44.156609  180819 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 14:14:44.165256  180819 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 14:14:44.174964  180819 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 14:14:44.190958  180819 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 14:14:44.200718  180819 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 14:14:44.208887  180819 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 14:14:44.216436  180819 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 14:14:44.343393  180819 ssh_runner.go:195] Run: sudo systemctl restart crio
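Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with a pause image, cgroup manager, conmon cgroup, and an unprivileged-port sysctl, which the restart then picks up. A verification sketch; the commented fragment is the implied result, with field order assumed:

    # Implied 02-crio.conf fragment after the edits above (order assumed):
    #   pause_image = "registry.k8s.io/pause:3.9"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   default_sysctls = [
    #     "net.ipv4.ip_unprivileged_port_start=0",
    #   ]
    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged' \
      /etc/crio/crio.conf.d/02-crio.conf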
	I1124 14:14:44.528591  180819 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1124 14:14:44.528707  180819 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1124 14:14:44.538062  180819 start.go:564] Will wait 60s for crictl version
	I1124 14:14:44.538205  180819 ssh_runner.go:195] Run: which crictl
	I1124 14:14:44.543828  180819 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 14:14:44.573158  180819 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1124 14:14:44.573254  180819 ssh_runner.go:195] Run: crio --version
	I1124 14:14:44.603832  180819 ssh_runner.go:195] Run: crio --version
	I1124 14:14:44.640448  180819 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.2 ...
	I1124 14:14:44.643417  180819 cli_runner.go:164] Run: docker network inspect old-k8s-version-706771 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 14:14:44.659172  180819 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1124 14:14:44.663091  180819 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 14:14:44.672956  180819 kubeadm.go:884] updating cluster {Name:old-k8s-version-706771 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-706771 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 14:14:44.673071  180819 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1124 14:14:44.673147  180819 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 14:14:44.717663  180819 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 14:14:44.717690  180819 crio.go:433] Images already preloaded, skipping extraction
	I1124 14:14:44.717745  180819 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 14:14:44.754522  180819 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 14:14:44.754552  180819 cache_images.go:86] Images are preloaded, skipping loading
	I1124 14:14:44.754560  180819 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.28.0 crio true true} ...
	I1124 14:14:44.754659  180819 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-706771 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-706771 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
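The unit text above is rendered into a systemd drop-in (transferred a few lines below as 10-kubeadm.conf). A quick sketch for confirming what the kubelet actually runs with on the node:

    # Inspect the rendered kubelet unit plus its kubeadm drop-in.
    systemctl cat kubelet
    systemctl show kubelet -p ExecStart   # effective command line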
	I1124 14:14:44.754748  180819 ssh_runner.go:195] Run: crio config
	I1124 14:14:44.842529  180819 cni.go:84] Creating CNI manager for ""
	I1124 14:14:44.842562  180819 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 14:14:44.842586  180819 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 14:14:44.842608  180819 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-706771 NodeName:old-k8s-version-706771 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 14:14:44.842760  180819 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-706771"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1124 14:14:44.842840  180819 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1124 14:14:44.854692  180819 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 14:14:44.854780  180819 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 14:14:44.865040  180819 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1124 14:14:44.880297  180819 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 14:14:44.894598  180819 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
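With the unit, drop-in, and kubeadm.yaml.new staged, the generated config can be sanity-checked ahead of the real init further down. A sketch, assuming the same staged paths:

    # Validate the generated config without mutating the host.
    sudo /var/lib/minikube/binaries/v1.28.0/kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml.new --dry-run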
	I1124 14:14:44.909593  180819 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1124 14:14:44.913693  180819 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 14:14:44.924936  180819 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 14:14:45.068076  180819 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 14:14:45.090704  180819 certs.go:69] Setting up /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/old-k8s-version-706771 for IP: 192.168.76.2
	I1124 14:14:45.090779  180819 certs.go:195] generating shared ca certs ...
	I1124 14:14:45.090815  180819 certs.go:227] acquiring lock for ca certs: {Name:mk5b88bcf3bee8e73291a2c9c79f99bafa2afa7b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:14:45.091023  180819 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21932-2805/.minikube/ca.key
	I1124 14:14:45.091106  180819 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21932-2805/.minikube/proxy-client-ca.key
	I1124 14:14:45.091151  180819 certs.go:257] generating profile certs ...
	I1124 14:14:45.091256  180819 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/old-k8s-version-706771/client.key
	I1124 14:14:45.091293  180819 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/old-k8s-version-706771/client.crt with IP's: []
	I1124 14:14:45.197627  180819 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/old-k8s-version-706771/client.crt ...
	I1124 14:14:45.197667  180819 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/old-k8s-version-706771/client.crt: {Name:mk6c6ebe8a756f498ae973304361c8b439414320 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:14:45.197893  180819 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/old-k8s-version-706771/client.key ...
	I1124 14:14:45.197907  180819 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/old-k8s-version-706771/client.key: {Name:mkd47f968e89eb095eaf293ced0ca16c0d7fdb38 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:14:45.197998  180819 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/old-k8s-version-706771/apiserver.key.e094559e
	I1124 14:14:45.198014  180819 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/old-k8s-version-706771/apiserver.crt.e094559e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1124 14:14:45.496135  180819 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/old-k8s-version-706771/apiserver.crt.e094559e ...
	I1124 14:14:45.496167  180819 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/old-k8s-version-706771/apiserver.crt.e094559e: {Name:mk05aee6bb152bbe06024e0cce3585d14e6a7696 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:14:45.496424  180819 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/old-k8s-version-706771/apiserver.key.e094559e ...
	I1124 14:14:45.496443  180819 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/old-k8s-version-706771/apiserver.key.e094559e: {Name:mke8c2318bb49c43a0e31b0c99d30d5b910654e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:14:45.496571  180819 certs.go:382] copying /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/old-k8s-version-706771/apiserver.crt.e094559e -> /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/old-k8s-version-706771/apiserver.crt
	I1124 14:14:45.496674  180819 certs.go:386] copying /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/old-k8s-version-706771/apiserver.key.e094559e -> /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/old-k8s-version-706771/apiserver.key
	I1124 14:14:45.496771  180819 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/old-k8s-version-706771/proxy-client.key
	I1124 14:14:45.496824  180819 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/old-k8s-version-706771/proxy-client.crt with IP's: []
	I1124 14:14:45.748053  180819 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/old-k8s-version-706771/proxy-client.crt ...
	I1124 14:14:45.748085  180819 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/old-k8s-version-706771/proxy-client.crt: {Name:mk5732269f84a143b9b263d7018c1b9179e83276 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:14:45.748282  180819 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/old-k8s-version-706771/proxy-client.key ...
	I1124 14:14:45.748296  180819 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/old-k8s-version-706771/proxy-client.key: {Name:mk0f12484b3bb63a28f66dcc056d8a2ea5a68b55 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:14:45.748484  180819 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2805/.minikube/certs/4611.pem (1338 bytes)
	W1124 14:14:45.748537  180819 certs.go:480] ignoring /home/jenkins/minikube-integration/21932-2805/.minikube/certs/4611_empty.pem, impossibly tiny 0 bytes
	I1124 14:14:45.748552  180819 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca-key.pem (1679 bytes)
	I1124 14:14:45.748581  180819 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca.pem (1078 bytes)
	I1124 14:14:45.748611  180819 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2805/.minikube/certs/cert.pem (1123 bytes)
	I1124 14:14:45.748642  180819 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2805/.minikube/certs/key.pem (1675 bytes)
	I1124 14:14:45.748699  180819 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2805/.minikube/files/etc/ssl/certs/46112.pem (1708 bytes)
	I1124 14:14:45.749307  180819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 14:14:45.774661  180819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1124 14:14:45.795267  180819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 14:14:45.815933  180819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1124 14:14:45.835381  180819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/old-k8s-version-706771/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1124 14:14:45.854281  180819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/old-k8s-version-706771/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1124 14:14:45.873245  180819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/old-k8s-version-706771/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 14:14:45.893664  180819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/old-k8s-version-706771/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1124 14:14:45.912303  180819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 14:14:45.931989  180819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/certs/4611.pem --> /usr/share/ca-certificates/4611.pem (1338 bytes)
	I1124 14:14:45.950105  180819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/files/etc/ssl/certs/46112.pem --> /usr/share/ca-certificates/46112.pem (1708 bytes)
	I1124 14:14:45.972212  180819 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 14:14:45.987421  180819 ssh_runner.go:195] Run: openssl version
	I1124 14:14:45.994394  180819 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 14:14:46.005750  180819 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 14:14:46.011118  180819 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 13:14 /usr/share/ca-certificates/minikubeCA.pem
	I1124 14:14:46.011234  180819 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 14:14:46.055916  180819 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 14:14:46.065463  180819 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4611.pem && ln -fs /usr/share/ca-certificates/4611.pem /etc/ssl/certs/4611.pem"
	I1124 14:14:46.074146  180819 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4611.pem
	I1124 14:14:46.078321  180819 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 13:21 /usr/share/ca-certificates/4611.pem
	I1124 14:14:46.078421  180819 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4611.pem
	I1124 14:14:46.120402  180819 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4611.pem /etc/ssl/certs/51391683.0"
	I1124 14:14:46.129434  180819 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/46112.pem && ln -fs /usr/share/ca-certificates/46112.pem /etc/ssl/certs/46112.pem"
	I1124 14:14:46.138168  180819 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/46112.pem
	I1124 14:14:46.142296  180819 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 13:21 /usr/share/ca-certificates/46112.pem
	I1124 14:14:46.142402  180819 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/46112.pem
	I1124 14:14:46.184353  180819 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/46112.pem /etc/ssl/certs/3ec20f2e.0"
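The test/ln pairs above reproduce OpenSSL's hashed-directory lookup: each CA under /etc/ssl/certs is reachable via a <subject-hash>.0 symlink. The generic idiom, with cert.pem as a placeholder path:

    # Compute the subject hash and publish the cert under it.
    hash=$(openssl x509 -hash -noout -in cert.pem)
    sudo ln -fs "$(pwd)/cert.pem" "/etc/ssl/certs/${hash}.0"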
	I1124 14:14:46.192985  180819 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 14:14:46.196748  180819 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1124 14:14:46.196799  180819 kubeadm.go:401] StartCluster: {Name:old-k8s-version-706771 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-706771 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 14:14:46.196873  180819 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 14:14:46.196949  180819 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 14:14:46.225109  180819 cri.go:89] found id: ""
	I1124 14:14:46.225215  180819 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 14:14:46.233645  180819 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1124 14:14:46.242439  180819 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1124 14:14:46.242505  180819 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1124 14:14:46.251061  180819 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1124 14:14:46.251128  180819 kubeadm.go:158] found existing configuration files:
	
	I1124 14:14:46.251213  180819 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1124 14:14:46.258906  180819 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1124 14:14:46.258997  180819 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1124 14:14:46.266627  180819 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1124 14:14:46.274853  180819 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1124 14:14:46.274918  180819 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1124 14:14:46.282613  180819 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1124 14:14:46.292331  180819 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1124 14:14:46.292400  180819 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1124 14:14:46.300495  180819 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1124 14:14:46.308329  180819 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1124 14:14:46.308399  180819 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1124 14:14:46.316415  180819 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1124 14:14:46.366265  180819 kubeadm.go:319] [init] Using Kubernetes version: v1.28.0
	I1124 14:14:46.366577  180819 kubeadm.go:319] [preflight] Running pre-flight checks
	I1124 14:14:46.407023  180819 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1124 14:14:46.407100  180819 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1124 14:14:46.407139  180819 kubeadm.go:319] OS: Linux
	I1124 14:14:46.407188  180819 kubeadm.go:319] CGROUPS_CPU: enabled
	I1124 14:14:46.407240  180819 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1124 14:14:46.407291  180819 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1124 14:14:46.407342  180819 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1124 14:14:46.407431  180819 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1124 14:14:46.407484  180819 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1124 14:14:46.407533  180819 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1124 14:14:46.407583  180819 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1124 14:14:46.407633  180819 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1124 14:14:46.490871  180819 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1124 14:14:46.491008  180819 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1124 14:14:46.491127  180819 kubeadm.go:319] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1124 14:14:46.646849  180819 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1124 14:14:46.652086  180819 out.go:252]   - Generating certificates and keys ...
	I1124 14:14:46.652180  180819 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1124 14:14:46.652272  180819 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1124 14:14:47.119334  180819 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1124 14:14:47.517741  180819 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1124 14:14:48.368959  180819 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1124 14:14:48.567553  180819 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1124 14:14:48.881179  180819 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1124 14:14:48.881609  180819 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-706771] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1124 14:14:49.305647  180819 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1124 14:14:49.306022  180819 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-706771] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1124 14:14:49.519292  180819 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1124 14:14:50.022154  180819 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1124 14:14:50.927619  180819 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1124 14:14:50.927932  180819 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1124 14:14:51.513588  180819 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1124 14:14:52.170337  180819 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1124 14:14:52.435021  180819 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1124 14:14:52.816578  180819 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1124 14:14:52.817520  180819 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1124 14:14:52.821390  180819 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1124 14:14:52.824865  180819 out.go:252]   - Booting up control plane ...
	I1124 14:14:52.824964  180819 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1124 14:14:52.825042  180819 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1124 14:14:52.828342  180819 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1124 14:14:52.845235  180819 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1124 14:14:52.846479  180819 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1124 14:14:52.846533  180819 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1124 14:14:52.990876  180819 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1124 14:15:00.995227  180819 kubeadm.go:319] [apiclient] All control plane components are healthy after 8.004886 seconds
	I1124 14:15:00.995418  180819 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1124 14:15:01.016656  180819 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1124 14:15:01.547505  180819 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1124 14:15:01.547718  180819 kubeadm.go:319] [mark-control-plane] Marking the node old-k8s-version-706771 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1124 14:15:02.061348  180819 kubeadm.go:319] [bootstrap-token] Using token: bhc4kp.o1rpnh1bwbouhd3d
	I1124 14:15:02.064350  180819 out.go:252]   - Configuring RBAC rules ...
	I1124 14:15:02.064507  180819 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1124 14:15:02.070284  180819 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1124 14:15:02.079920  180819 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1124 14:15:02.087563  180819 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1124 14:15:02.094113  180819 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1124 14:15:02.098734  180819 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1124 14:15:02.115931  180819 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1124 14:15:02.415219  180819 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1124 14:15:02.490627  180819 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1124 14:15:02.506147  180819 kubeadm.go:319] 
	I1124 14:15:02.506226  180819 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1124 14:15:02.506232  180819 kubeadm.go:319] 
	I1124 14:15:02.506309  180819 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1124 14:15:02.506313  180819 kubeadm.go:319] 
	I1124 14:15:02.506337  180819 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1124 14:15:02.506959  180819 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1124 14:15:02.507018  180819 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1124 14:15:02.507023  180819 kubeadm.go:319] 
	I1124 14:15:02.507083  180819 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1124 14:15:02.507087  180819 kubeadm.go:319] 
	I1124 14:15:02.507135  180819 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1124 14:15:02.507139  180819 kubeadm.go:319] 
	I1124 14:15:02.507190  180819 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1124 14:15:02.507266  180819 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1124 14:15:02.507335  180819 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1124 14:15:02.507339  180819 kubeadm.go:319] 
	I1124 14:15:02.507494  180819 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1124 14:15:02.507572  180819 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1124 14:15:02.507576  180819 kubeadm.go:319] 
	I1124 14:15:02.507661  180819 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token bhc4kp.o1rpnh1bwbouhd3d \
	I1124 14:15:02.507764  180819 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:37f0f49cec723293ddb4e564b6685275917c85627d2c55051ccb0f083d16274f \
	I1124 14:15:02.507785  180819 kubeadm.go:319] 	--control-plane 
	I1124 14:15:02.507789  180819 kubeadm.go:319] 
	I1124 14:15:02.507873  180819 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1124 14:15:02.507877  180819 kubeadm.go:319] 
	I1124 14:15:02.507959  180819 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token bhc4kp.o1rpnh1bwbouhd3d \
	I1124 14:15:02.508061  180819 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:37f0f49cec723293ddb4e564b6685275917c85627d2c55051ccb0f083d16274f 
	I1124 14:15:02.512555  180819 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1124 14:15:02.512676  180819 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1124 14:15:02.512695  180819 cni.go:84] Creating CNI manager for ""
	I1124 14:15:02.512702  180819 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 14:15:02.515925  180819 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1124 14:15:02.519018  180819 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1124 14:15:02.523890  180819 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.0/kubectl ...
	I1124 14:15:02.523909  180819 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1124 14:15:02.543306  180819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1124 14:15:03.569673  180819 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.02632748s)
	I1124 14:15:03.569742  180819 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1124 14:15:03.569879  180819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:15:03.569988  180819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-706771 minikube.k8s.io/updated_at=2025_11_24T14_15_03_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=b5d1c9f4e75f4e638a533695fd62619949cefcab minikube.k8s.io/name=old-k8s-version-706771 minikube.k8s.io/primary=true
	I1124 14:15:03.757687  180819 ops.go:34] apiserver oom_adj: -16
	I1124 14:15:03.757903  180819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:15:04.258711  180819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:15:04.758699  180819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:15:05.258514  180819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:15:05.757976  180819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:15:06.257898  180819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:15:06.758512  180819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:15:07.258771  180819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:15:07.758003  180819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:15:08.258506  180819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:15:08.757953  180819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:15:09.257936  180819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:15:09.758341  180819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:15:10.257950  180819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:15:10.758542  180819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:15:11.258936  180819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:15:11.758586  180819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:15:12.257961  180819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:15:12.758683  180819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:15:13.258744  180819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:15:13.758798  180819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:15:14.258833  180819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:15:14.374534  180819 kubeadm.go:1114] duration metric: took 10.804723113s to wait for elevateKubeSystemPrivileges
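The repeated "kubectl get sa default" calls above are a roughly 500ms poll for the default ServiceAccount, which only appears once the controller manager has settled. The same wait, condensed (kubectl of this vintage has no wait-for-create condition, hence the retry loop):

    # Poll until the default ServiceAccount exists.
    until sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done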
	I1124 14:15:14.374570  180819 kubeadm.go:403] duration metric: took 28.177774358s to StartCluster
	I1124 14:15:14.374588  180819 settings.go:142] acquiring lock: {Name:mk89c1ba43c874315f683e1eb3a8f5ff3817a931 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:15:14.374648  180819 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21932-2805/kubeconfig
	I1124 14:15:14.375710  180819 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2805/kubeconfig: {Name:mk95d10d27091d631e85a5a3c35d5e4e38630871 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:15:14.375935  180819 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 14:15:14.376061  180819 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1124 14:15:14.376330  180819 config.go:182] Loaded profile config "old-k8s-version-706771": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1124 14:15:14.376366  180819 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 14:15:14.376422  180819 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-706771"
	I1124 14:15:14.376436  180819 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-706771"
	I1124 14:15:14.376461  180819 host.go:66] Checking if "old-k8s-version-706771" exists ...
	I1124 14:15:14.377172  180819 cli_runner.go:164] Run: docker container inspect old-k8s-version-706771 --format={{.State.Status}}
	I1124 14:15:14.377528  180819 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-706771"
	I1124 14:15:14.377551  180819 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-706771"
	I1124 14:15:14.377840  180819 cli_runner.go:164] Run: docker container inspect old-k8s-version-706771 --format={{.State.Status}}
	I1124 14:15:14.379618  180819 out.go:179] * Verifying Kubernetes components...
	I1124 14:15:14.383283  180819 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 14:15:14.411022  180819 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 14:15:14.413105  180819 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-706771"
	I1124 14:15:14.413147  180819 host.go:66] Checking if "old-k8s-version-706771" exists ...
	I1124 14:15:14.413621  180819 cli_runner.go:164] Run: docker container inspect old-k8s-version-706771 --format={{.State.Status}}
	I1124 14:15:14.413856  180819 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 14:15:14.413873  180819 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 14:15:14.413925  180819 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-706771
	I1124 14:15:14.462555  180819 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 14:15:14.462581  180819 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 14:15:14.462649  180819 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-706771
	I1124 14:15:14.465080  180819 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33048 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/old-k8s-version-706771/id_rsa Username:docker}
	I1124 14:15:14.508700  180819 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33048 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/old-k8s-version-706771/id_rsa Username:docker}
	I1124 14:15:14.789301  180819 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1124 14:15:14.789508  180819 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 14:15:14.840680  180819 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 14:15:14.849040  180819 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 14:15:15.661792  180819 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
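The sed pipeline at 14:15:14.789 splices a hosts block (and a log directive) into the Corefile before replacing the ConfigMap. The implied fragment, plus a way to read back the live Corefile (indentation assumed from the sed expressions):

    # Implied Corefile addition:
    #   hosts {
    #      192.168.76.1 host.minikube.internal
    #      fallthrough
    #   }
    sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'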
	I1124 14:15:15.662308  180819 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-706771" to be "Ready" ...
	I1124 14:15:16.066545  180819 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.2257699s)
	I1124 14:15:16.066608  180819 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.217497043s)
	I1124 14:15:16.082103  180819 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1124 14:15:16.084892  180819 addons.go:530] duration metric: took 1.708513732s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1124 14:15:16.167310  180819 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-706771" context rescaled to 1 replicas
	W1124 14:15:17.676879  180819 node_ready.go:57] node "old-k8s-version-706771" has "Ready":"False" status (will retry)
	W1124 14:15:20.166313  180819 node_ready.go:57] node "old-k8s-version-706771" has "Ready":"False" status (will retry)
	W1124 14:15:22.166590  180819 node_ready.go:57] node "old-k8s-version-706771" has "Ready":"False" status (will retry)
	W1124 14:15:24.166780  180819 node_ready.go:57] node "old-k8s-version-706771" has "Ready":"False" status (will retry)
	W1124 14:15:26.666002  180819 node_ready.go:57] node "old-k8s-version-706771" has "Ready":"False" status (will retry)
	I1124 14:15:28.666596  180819 node_ready.go:49] node "old-k8s-version-706771" is "Ready"
	I1124 14:15:28.666628  180819 node_ready.go:38] duration metric: took 13.004190808s for node "old-k8s-version-706771" to be "Ready" ...
	I1124 14:15:28.666642  180819 api_server.go:52] waiting for apiserver process to appear ...
	I1124 14:15:28.666701  180819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
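	The process check above matches the apiserver by its full command line; a quoted standalone form of the same probe, assuming the same pattern:
	  # -x: pattern must match the whole command line, -n: newest match only, -f: match full command line
	  sudo pgrep -xnf 'kube-apiserver.*minikube.*'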
	I1124 14:15:28.680633  180819 api_server.go:72] duration metric: took 14.304660888s to wait for apiserver process to appear ...
	I1124 14:15:28.680661  180819 api_server.go:88] waiting for apiserver healthz status ...
	I1124 14:15:28.680681  180819 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 14:15:28.689441  180819 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
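	The healthz probe is a plain HTTPS GET against the apiserver endpoint shown above; an equivalent manual check (-k skips verification against the cluster CA, so this is a convenience sketch, not a production probe):
	  # Expect HTTP 200 with body "ok" when the apiserver is healthy
	  curl -ks https://192.168.76.2:8443/healthz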
	I1124 14:15:28.690933  180819 api_server.go:141] control plane version: v1.28.0
	I1124 14:15:28.690960  180819 api_server.go:131] duration metric: took 10.291283ms to wait for apiserver health ...
	I1124 14:15:28.690970  180819 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 14:15:28.695386  180819 system_pods.go:59] 8 kube-system pods found
	I1124 14:15:28.695422  180819 system_pods.go:61] "coredns-5dd5756b68-znmnc" [b0bcf3de-2ab4-48ba-b370-da2bf423cfdd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 14:15:28.695435  180819 system_pods.go:61] "etcd-old-k8s-version-706771" [7e887a47-501e-44b0-951b-4a6810aabf89] Running
	I1124 14:15:28.695441  180819 system_pods.go:61] "kindnet-95mv4" [fb45bd9c-2a20-424a-83a6-fa60017b50ac] Running
	I1124 14:15:28.695445  180819 system_pods.go:61] "kube-apiserver-old-k8s-version-706771" [a0e63866-ac6c-4a7f-9731-73fbbd6e45b6] Running
	I1124 14:15:28.695450  180819 system_pods.go:61] "kube-controller-manager-old-k8s-version-706771" [0c324973-9b9a-4122-b54a-383ef4eeb449] Running
	I1124 14:15:28.695454  180819 system_pods.go:61] "kube-proxy-b7d5h" [8ece550d-eacc-4c08-8445-2cf7769e2988] Running
	I1124 14:15:28.695457  180819 system_pods.go:61] "kube-scheduler-old-k8s-version-706771" [c7923b32-5218-4bd1-ac2e-cc167172ca87] Running
	I1124 14:15:28.695465  180819 system_pods.go:61] "storage-provisioner" [3ca62966-d0c0-4e3f-8e48-8afb3c015191] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 14:15:28.695475  180819 system_pods.go:74] duration metric: took 4.499177ms to wait for pod list to return data ...
	I1124 14:15:28.695496  180819 default_sa.go:34] waiting for default service account to be created ...
	I1124 14:15:28.698296  180819 default_sa.go:45] found service account: "default"
	I1124 14:15:28.698321  180819 default_sa.go:55] duration metric: took 2.81819ms for default service account to be created ...
	I1124 14:15:28.698332  180819 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 14:15:28.702047  180819 system_pods.go:86] 8 kube-system pods found
	I1124 14:15:28.702147  180819 system_pods.go:89] "coredns-5dd5756b68-znmnc" [b0bcf3de-2ab4-48ba-b370-da2bf423cfdd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 14:15:28.702164  180819 system_pods.go:89] "etcd-old-k8s-version-706771" [7e887a47-501e-44b0-951b-4a6810aabf89] Running
	I1124 14:15:28.702172  180819 system_pods.go:89] "kindnet-95mv4" [fb45bd9c-2a20-424a-83a6-fa60017b50ac] Running
	I1124 14:15:28.702177  180819 system_pods.go:89] "kube-apiserver-old-k8s-version-706771" [a0e63866-ac6c-4a7f-9731-73fbbd6e45b6] Running
	I1124 14:15:28.702205  180819 system_pods.go:89] "kube-controller-manager-old-k8s-version-706771" [0c324973-9b9a-4122-b54a-383ef4eeb449] Running
	I1124 14:15:28.702216  180819 system_pods.go:89] "kube-proxy-b7d5h" [8ece550d-eacc-4c08-8445-2cf7769e2988] Running
	I1124 14:15:28.702220  180819 system_pods.go:89] "kube-scheduler-old-k8s-version-706771" [c7923b32-5218-4bd1-ac2e-cc167172ca87] Running
	I1124 14:15:28.702226  180819 system_pods.go:89] "storage-provisioner" [3ca62966-d0c0-4e3f-8e48-8afb3c015191] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 14:15:28.702254  180819 retry.go:31] will retry after 289.994894ms: missing components: kube-dns
	I1124 14:15:28.997194  180819 system_pods.go:86] 8 kube-system pods found
	I1124 14:15:28.997229  180819 system_pods.go:89] "coredns-5dd5756b68-znmnc" [b0bcf3de-2ab4-48ba-b370-da2bf423cfdd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 14:15:28.997239  180819 system_pods.go:89] "etcd-old-k8s-version-706771" [7e887a47-501e-44b0-951b-4a6810aabf89] Running
	I1124 14:15:28.997273  180819 system_pods.go:89] "kindnet-95mv4" [fb45bd9c-2a20-424a-83a6-fa60017b50ac] Running
	I1124 14:15:28.997286  180819 system_pods.go:89] "kube-apiserver-old-k8s-version-706771" [a0e63866-ac6c-4a7f-9731-73fbbd6e45b6] Running
	I1124 14:15:28.997291  180819 system_pods.go:89] "kube-controller-manager-old-k8s-version-706771" [0c324973-9b9a-4122-b54a-383ef4eeb449] Running
	I1124 14:15:28.997295  180819 system_pods.go:89] "kube-proxy-b7d5h" [8ece550d-eacc-4c08-8445-2cf7769e2988] Running
	I1124 14:15:28.997299  180819 system_pods.go:89] "kube-scheduler-old-k8s-version-706771" [c7923b32-5218-4bd1-ac2e-cc167172ca87] Running
	I1124 14:15:28.997316  180819 system_pods.go:89] "storage-provisioner" [3ca62966-d0c0-4e3f-8e48-8afb3c015191] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 14:15:28.997347  180819 retry.go:31] will retry after 331.366024ms: missing components: kube-dns
	I1124 14:15:29.333554  180819 system_pods.go:86] 8 kube-system pods found
	I1124 14:15:29.333590  180819 system_pods.go:89] "coredns-5dd5756b68-znmnc" [b0bcf3de-2ab4-48ba-b370-da2bf423cfdd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 14:15:29.333598  180819 system_pods.go:89] "etcd-old-k8s-version-706771" [7e887a47-501e-44b0-951b-4a6810aabf89] Running
	I1124 14:15:29.333611  180819 system_pods.go:89] "kindnet-95mv4" [fb45bd9c-2a20-424a-83a6-fa60017b50ac] Running
	I1124 14:15:29.333619  180819 system_pods.go:89] "kube-apiserver-old-k8s-version-706771" [a0e63866-ac6c-4a7f-9731-73fbbd6e45b6] Running
	I1124 14:15:29.333628  180819 system_pods.go:89] "kube-controller-manager-old-k8s-version-706771" [0c324973-9b9a-4122-b54a-383ef4eeb449] Running
	I1124 14:15:29.333640  180819 system_pods.go:89] "kube-proxy-b7d5h" [8ece550d-eacc-4c08-8445-2cf7769e2988] Running
	I1124 14:15:29.333653  180819 system_pods.go:89] "kube-scheduler-old-k8s-version-706771" [c7923b32-5218-4bd1-ac2e-cc167172ca87] Running
	I1124 14:15:29.333659  180819 system_pods.go:89] "storage-provisioner" [3ca62966-d0c0-4e3f-8e48-8afb3c015191] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 14:15:29.333678  180819 retry.go:31] will retry after 297.87262ms: missing components: kube-dns
	I1124 14:15:29.635796  180819 system_pods.go:86] 8 kube-system pods found
	I1124 14:15:29.635832  180819 system_pods.go:89] "coredns-5dd5756b68-znmnc" [b0bcf3de-2ab4-48ba-b370-da2bf423cfdd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 14:15:29.635838  180819 system_pods.go:89] "etcd-old-k8s-version-706771" [7e887a47-501e-44b0-951b-4a6810aabf89] Running
	I1124 14:15:29.635848  180819 system_pods.go:89] "kindnet-95mv4" [fb45bd9c-2a20-424a-83a6-fa60017b50ac] Running
	I1124 14:15:29.635853  180819 system_pods.go:89] "kube-apiserver-old-k8s-version-706771" [a0e63866-ac6c-4a7f-9731-73fbbd6e45b6] Running
	I1124 14:15:29.635857  180819 system_pods.go:89] "kube-controller-manager-old-k8s-version-706771" [0c324973-9b9a-4122-b54a-383ef4eeb449] Running
	I1124 14:15:29.635861  180819 system_pods.go:89] "kube-proxy-b7d5h" [8ece550d-eacc-4c08-8445-2cf7769e2988] Running
	I1124 14:15:29.635865  180819 system_pods.go:89] "kube-scheduler-old-k8s-version-706771" [c7923b32-5218-4bd1-ac2e-cc167172ca87] Running
	I1124 14:15:29.635871  180819 system_pods.go:89] "storage-provisioner" [3ca62966-d0c0-4e3f-8e48-8afb3c015191] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 14:15:29.635892  180819 retry.go:31] will retry after 537.179759ms: missing components: kube-dns
	I1124 14:15:30.177387  180819 system_pods.go:86] 8 kube-system pods found
	I1124 14:15:30.177426  180819 system_pods.go:89] "coredns-5dd5756b68-znmnc" [b0bcf3de-2ab4-48ba-b370-da2bf423cfdd] Running
	I1124 14:15:30.177433  180819 system_pods.go:89] "etcd-old-k8s-version-706771" [7e887a47-501e-44b0-951b-4a6810aabf89] Running
	I1124 14:15:30.177437  180819 system_pods.go:89] "kindnet-95mv4" [fb45bd9c-2a20-424a-83a6-fa60017b50ac] Running
	I1124 14:15:30.177442  180819 system_pods.go:89] "kube-apiserver-old-k8s-version-706771" [a0e63866-ac6c-4a7f-9731-73fbbd6e45b6] Running
	I1124 14:15:30.177447  180819 system_pods.go:89] "kube-controller-manager-old-k8s-version-706771" [0c324973-9b9a-4122-b54a-383ef4eeb449] Running
	I1124 14:15:30.177451  180819 system_pods.go:89] "kube-proxy-b7d5h" [8ece550d-eacc-4c08-8445-2cf7769e2988] Running
	I1124 14:15:30.177456  180819 system_pods.go:89] "kube-scheduler-old-k8s-version-706771" [c7923b32-5218-4bd1-ac2e-cc167172ca87] Running
	I1124 14:15:30.177460  180819 system_pods.go:89] "storage-provisioner" [3ca62966-d0c0-4e3f-8e48-8afb3c015191] Running
	I1124 14:15:30.177468  180819 system_pods.go:126] duration metric: took 1.479130245s to wait for k8s-apps to be running ...
	I1124 14:15:30.177479  180819 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 14:15:30.177552  180819 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 14:15:30.191390  180819 system_svc.go:56] duration metric: took 13.86249ms WaitForService to wait for kubelet
	I1124 14:15:30.191430  180819 kubeadm.go:587] duration metric: took 15.815463155s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 14:15:30.191449  180819 node_conditions.go:102] verifying NodePressure condition ...
	I1124 14:15:30.194889  180819 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1124 14:15:30.194926  180819 node_conditions.go:123] node cpu capacity is 2
	I1124 14:15:30.194941  180819 node_conditions.go:105] duration metric: took 3.486571ms to run NodePressure ...
	I1124 14:15:30.194955  180819 start.go:242] waiting for startup goroutines ...
	I1124 14:15:30.194963  180819 start.go:247] waiting for cluster config update ...
	I1124 14:15:30.194978  180819 start.go:256] writing updated cluster config ...
	I1124 14:15:30.195278  180819 ssh_runner.go:195] Run: rm -f paused
	I1124 14:15:30.199673  180819 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 14:15:30.207775  180819 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-znmnc" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:15:30.213415  180819 pod_ready.go:94] pod "coredns-5dd5756b68-znmnc" is "Ready"
	I1124 14:15:30.213444  180819 pod_ready.go:86] duration metric: took 5.591144ms for pod "coredns-5dd5756b68-znmnc" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:15:30.216919  180819 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-706771" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:15:30.222146  180819 pod_ready.go:94] pod "etcd-old-k8s-version-706771" is "Ready"
	I1124 14:15:30.222177  180819 pod_ready.go:86] duration metric: took 5.22044ms for pod "etcd-old-k8s-version-706771" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:15:30.225608  180819 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-706771" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:15:30.230964  180819 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-706771" is "Ready"
	I1124 14:15:30.231039  180819 pod_ready.go:86] duration metric: took 5.403105ms for pod "kube-apiserver-old-k8s-version-706771" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:15:30.235075  180819 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-706771" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:15:30.604619  180819 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-706771" is "Ready"
	I1124 14:15:30.604653  180819 pod_ready.go:86] duration metric: took 369.552282ms for pod "kube-controller-manager-old-k8s-version-706771" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:15:30.805333  180819 pod_ready.go:83] waiting for pod "kube-proxy-b7d5h" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:15:31.204940  180819 pod_ready.go:94] pod "kube-proxy-b7d5h" is "Ready"
	I1124 14:15:31.204967  180819 pod_ready.go:86] duration metric: took 399.608744ms for pod "kube-proxy-b7d5h" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:15:31.404815  180819 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-706771" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:15:31.804499  180819 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-706771" is "Ready"
	I1124 14:15:31.804529  180819 pod_ready.go:86] duration metric: took 399.684764ms for pod "kube-scheduler-old-k8s-version-706771" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:15:31.804541  180819 pod_ready.go:40] duration metric: took 1.604832505s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 14:15:31.870363  180819 start.go:625] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1124 14:15:31.873304  180819 out.go:203] 
	W1124 14:15:31.876240  180819 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1124 14:15:31.879214  180819 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1124 14:15:31.883346  180819 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-706771" cluster and "default" namespace by default
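	The minor-skew warning above (host kubectl 1.33.2 against cluster 1.28.0) can be sidestepped with the bundled kubectl, as the output itself suggests; using this test's profile name:
	  # Runs the kubectl version matched to the cluster (v1.28.0) for this profile
	  minikube -p old-k8s-version-706771 kubectl -- get pods -A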
	
	
	==> CRI-O <==
	Nov 24 14:15:28 old-k8s-version-706771 crio[841]: time="2025-11-24T14:15:28.908544041Z" level=info msg="Created container 64aa25c9e5972a6c96845a5ae9b287650704bc945843217238a2ef51f88e7fd8: kube-system/coredns-5dd5756b68-znmnc/coredns" id=d0511a90-1a2f-4378-ba00-dd936e26ff32 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 14:15:28 old-k8s-version-706771 crio[841]: time="2025-11-24T14:15:28.911555726Z" level=info msg="Starting container: 64aa25c9e5972a6c96845a5ae9b287650704bc945843217238a2ef51f88e7fd8" id=26f4ee5a-ee98-4868-87ba-09a1f66b74b3 name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 14:15:28 old-k8s-version-706771 crio[841]: time="2025-11-24T14:15:28.91429092Z" level=info msg="Started container" PID=1910 containerID=64aa25c9e5972a6c96845a5ae9b287650704bc945843217238a2ef51f88e7fd8 description=kube-system/coredns-5dd5756b68-znmnc/coredns id=26f4ee5a-ee98-4868-87ba-09a1f66b74b3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=32ba48a5e82da3ecfe2e95942b23ffcd83d4e51b9be3c9de2e3c1aa3e214d518
	Nov 24 14:15:33 old-k8s-version-706771 crio[841]: time="2025-11-24T14:15:33.894920667Z" level=info msg="Running pod sandbox: default/busybox/POD" id=4e5a17c1-20b2-443e-8bc4-488dfffdf2b6 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 24 14:15:33 old-k8s-version-706771 crio[841]: time="2025-11-24T14:15:33.894993734Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 14:15:33 old-k8s-version-706771 crio[841]: time="2025-11-24T14:15:33.904948472Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:97a40691bfd630ea563522f64b5aaa0676ca597e0f1b0970ff3852877ecaf6f9 UID:1305436e-3503-4029-912d-8c8cf12da01f NetNS:/var/run/netns/cd8e3ea7-abe3-4ec5-bff2-68ec26e4c3c2 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000429b18}] Aliases:map[]}"
	Nov 24 14:15:33 old-k8s-version-706771 crio[841]: time="2025-11-24T14:15:33.904990188Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 24 14:15:33 old-k8s-version-706771 crio[841]: time="2025-11-24T14:15:33.913750311Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:97a40691bfd630ea563522f64b5aaa0676ca597e0f1b0970ff3852877ecaf6f9 UID:1305436e-3503-4029-912d-8c8cf12da01f NetNS:/var/run/netns/cd8e3ea7-abe3-4ec5-bff2-68ec26e4c3c2 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000429b18}] Aliases:map[]}"
	Nov 24 14:15:33 old-k8s-version-706771 crio[841]: time="2025-11-24T14:15:33.913911905Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 24 14:15:33 old-k8s-version-706771 crio[841]: time="2025-11-24T14:15:33.918875423Z" level=info msg="Ran pod sandbox 97a40691bfd630ea563522f64b5aaa0676ca597e0f1b0970ff3852877ecaf6f9 with infra container: default/busybox/POD" id=4e5a17c1-20b2-443e-8bc4-488dfffdf2b6 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 24 14:15:33 old-k8s-version-706771 crio[841]: time="2025-11-24T14:15:33.920171257Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=be1e5c7b-72b6-45e0-b722-274ea9aeed3c name=/runtime.v1.ImageService/ImageStatus
	Nov 24 14:15:33 old-k8s-version-706771 crio[841]: time="2025-11-24T14:15:33.920292481Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=be1e5c7b-72b6-45e0-b722-274ea9aeed3c name=/runtime.v1.ImageService/ImageStatus
	Nov 24 14:15:33 old-k8s-version-706771 crio[841]: time="2025-11-24T14:15:33.920363489Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=be1e5c7b-72b6-45e0-b722-274ea9aeed3c name=/runtime.v1.ImageService/ImageStatus
	Nov 24 14:15:33 old-k8s-version-706771 crio[841]: time="2025-11-24T14:15:33.921501077Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=ba99aff8-a872-442a-8889-6fb32277a631 name=/runtime.v1.ImageService/PullImage
	Nov 24 14:15:33 old-k8s-version-706771 crio[841]: time="2025-11-24T14:15:33.92398035Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 24 14:15:35 old-k8s-version-706771 crio[841]: time="2025-11-24T14:15:35.784033549Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=ba99aff8-a872-442a-8889-6fb32277a631 name=/runtime.v1.ImageService/PullImage
	Nov 24 14:15:35 old-k8s-version-706771 crio[841]: time="2025-11-24T14:15:35.786702379Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=ea7cdb75-d6bd-47b2-8210-775459f9ff0a name=/runtime.v1.ImageService/ImageStatus
	Nov 24 14:15:35 old-k8s-version-706771 crio[841]: time="2025-11-24T14:15:35.790003355Z" level=info msg="Creating container: default/busybox/busybox" id=24eedce8-cd82-4bfc-ad87-e9a19fe43a1e name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 14:15:35 old-k8s-version-706771 crio[841]: time="2025-11-24T14:15:35.790112542Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 14:15:35 old-k8s-version-706771 crio[841]: time="2025-11-24T14:15:35.802888522Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 14:15:35 old-k8s-version-706771 crio[841]: time="2025-11-24T14:15:35.803440126Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 14:15:35 old-k8s-version-706771 crio[841]: time="2025-11-24T14:15:35.82128676Z" level=info msg="Created container 9b9090a2c45d50006354e545b39991ea3016f0dad315e34334262a1d2a9558bc: default/busybox/busybox" id=24eedce8-cd82-4bfc-ad87-e9a19fe43a1e name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 14:15:35 old-k8s-version-706771 crio[841]: time="2025-11-24T14:15:35.82362579Z" level=info msg="Starting container: 9b9090a2c45d50006354e545b39991ea3016f0dad315e34334262a1d2a9558bc" id=fc34de19-8162-4cf9-b016-8e49d784fa06 name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 14:15:35 old-k8s-version-706771 crio[841]: time="2025-11-24T14:15:35.826337188Z" level=info msg="Started container" PID=1968 containerID=9b9090a2c45d50006354e545b39991ea3016f0dad315e34334262a1d2a9558bc description=default/busybox/busybox id=fc34de19-8162-4cf9-b016-8e49d784fa06 name=/runtime.v1.RuntimeService/StartContainer sandboxID=97a40691bfd630ea563522f64b5aaa0676ca597e0f1b0970ff3852877ecaf6f9
	Nov 24 14:15:42 old-k8s-version-706771 crio[841]: time="2025-11-24T14:15:42.27191021Z" level=error msg="Unhandled Error: unable to upgrade websocket connection: websocket server finished before becoming ready (logger=\"UnhandledError\")"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	9b9090a2c45d5       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   7 seconds ago       Running             busybox                   0                   97a40691bfd63       busybox                                          default
	64aa25c9e5972       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                      14 seconds ago      Running             coredns                   0                   32ba48a5e82da       coredns-5dd5756b68-znmnc                         kube-system
	7968de1444c98       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      14 seconds ago      Running             storage-provisioner       0                   102a30125607d       storage-provisioner                              kube-system
	16e09c9738d27       docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1    25 seconds ago      Running             kindnet-cni               0                   58f3d79fb8ac0       kindnet-95mv4                                    kube-system
	80b7862f090b8       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                      28 seconds ago      Running             kube-proxy                0                   8e552ce1c51f9       kube-proxy-b7d5h                                 kube-system
	472120271ac41       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                      49 seconds ago      Running             etcd                      0                   ed8542a7f8e06       etcd-old-k8s-version-706771                      kube-system
	9daec69fec979       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                      49 seconds ago      Running             kube-apiserver            0                   903a2e066af03       kube-apiserver-old-k8s-version-706771            kube-system
	f7b2d5a3f0a47       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                      49 seconds ago      Running             kube-scheduler            0                   4c824417ac113       kube-scheduler-old-k8s-version-706771            kube-system
	a49d1381084ce       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                      49 seconds ago      Running             kube-controller-manager   0                   584ee129de46b       kube-controller-manager-old-k8s-version-706771   kube-system
	
	
	==> coredns [64aa25c9e5972a6c96845a5ae9b287650704bc945843217238a2ef51f88e7fd8] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:36618 - 63702 "HINFO IN 6649542629714282360.6524837846472442133. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.022188378s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-706771
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-706771
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b5d1c9f4e75f4e638a533695fd62619949cefcab
	                    minikube.k8s.io/name=old-k8s-version-706771
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T14_15_03_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 14:14:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-706771
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 14:15:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 14:15:33 +0000   Mon, 24 Nov 2025 14:14:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 14:15:33 +0000   Mon, 24 Nov 2025 14:14:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 14:15:33 +0000   Mon, 24 Nov 2025 14:14:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Nov 2025 14:15:33 +0000   Mon, 24 Nov 2025 14:15:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-706771
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 7283ea1857f18f20a875c29069214c9d
	  System UUID:                14492c3a-8806-4276-8078-fdf3e23d5fc8
	  Boot ID:                    1b5f797b-5607-4a65-8de2-379783b7e272
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-5dd5756b68-znmnc                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     29s
	  kube-system                 etcd-old-k8s-version-706771                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         41s
	  kube-system                 kindnet-95mv4                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      29s
	  kube-system                 kube-apiserver-old-k8s-version-706771             250m (12%)    0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 kube-controller-manager-old-k8s-version-706771    200m (10%)    0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 kube-proxy-b7d5h                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-scheduler-old-k8s-version-706771             100m (5%)     0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 28s                kube-proxy       
	  Normal  NodeHasSufficientMemory  49s (x8 over 49s)  kubelet          Node old-k8s-version-706771 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    49s (x8 over 49s)  kubelet          Node old-k8s-version-706771 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     49s (x8 over 49s)  kubelet          Node old-k8s-version-706771 status is now: NodeHasSufficientPID
	  Normal  Starting                 41s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  41s                kubelet          Node old-k8s-version-706771 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    41s                kubelet          Node old-k8s-version-706771 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     41s                kubelet          Node old-k8s-version-706771 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           30s                node-controller  Node old-k8s-version-706771 event: Registered Node old-k8s-version-706771 in Controller
	  Normal  NodeReady                15s                kubelet          Node old-k8s-version-706771 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov24 13:45] overlayfs: idmapped layers are currently not supported
	[Nov24 13:46] overlayfs: idmapped layers are currently not supported
	[Nov24 13:52] overlayfs: idmapped layers are currently not supported
	[ +31.432146] overlayfs: idmapped layers are currently not supported
	[Nov24 13:53] overlayfs: idmapped layers are currently not supported
	[Nov24 13:54] overlayfs: idmapped layers are currently not supported
	[Nov24 13:56] overlayfs: idmapped layers are currently not supported
	[Nov24 13:57] overlayfs: idmapped layers are currently not supported
	[Nov24 13:58] overlayfs: idmapped layers are currently not supported
	[  +2.963383] overlayfs: idmapped layers are currently not supported
	[ +47.364934] overlayfs: idmapped layers are currently not supported
	[Nov24 13:59] overlayfs: idmapped layers are currently not supported
	[Nov24 14:00] overlayfs: idmapped layers are currently not supported
	[ +26.972375] overlayfs: idmapped layers are currently not supported
	[Nov24 14:02] overlayfs: idmapped layers are currently not supported
	[Nov24 14:03] overlayfs: idmapped layers are currently not supported
	[Nov24 14:05] overlayfs: idmapped layers are currently not supported
	[Nov24 14:07] overlayfs: idmapped layers are currently not supported
	[ +22.741489] overlayfs: idmapped layers are currently not supported
	[Nov24 14:11] overlayfs: idmapped layers are currently not supported
	[Nov24 14:13] overlayfs: idmapped layers are currently not supported
	[ +29.661409] overlayfs: idmapped layers are currently not supported
	[ +14.398898] overlayfs: idmapped layers are currently not supported
	[Nov24 14:14] overlayfs: idmapped layers are currently not supported
	[ +36.148198] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [472120271ac41aee999b619a78b3beee45e256f50ac7cb4293730651c9fb3bed] <==
	{"level":"info","ts":"2025-11-24T14:14:54.818566Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2025-11-24T14:14:54.818748Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2025-11-24T14:14:54.825596Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-11-24T14:14:54.825693Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-11-24T14:14:54.832905Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-11-24T14:14:54.833686Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-24T14:14:54.833762Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-24T14:14:55.59107Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 1"}
	{"level":"info","ts":"2025-11-24T14:14:55.591187Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 1"}
	{"level":"info","ts":"2025-11-24T14:14:55.591227Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 1"}
	{"level":"info","ts":"2025-11-24T14:14:55.591274Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 2"}
	{"level":"info","ts":"2025-11-24T14:14:55.591305Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-11-24T14:14:55.591346Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 2"}
	{"level":"info","ts":"2025-11-24T14:14:55.591396Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-11-24T14:14:55.595569Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:old-k8s-version-706771 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-24T14:14:55.595659Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-24T14:14:55.596783Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-11-24T14:14:55.596915Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-24T14:14:55.597262Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-24T14:14:55.598177Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-11-24T14:14:55.59875Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-24T14:14:55.598872Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-24T14:14:55.598931Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-24T14:14:55.600497Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-24T14:14:55.600565Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 14:15:43 up  1:58,  0 user,  load average: 2.58, 2.81, 2.34
	Linux old-k8s-version-706771 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [16e09c9738d272dd755882868849446019fb84118efe33b12912197e6f18d01b] <==
	I1124 14:15:18.112105       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1124 14:15:18.112579       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1124 14:15:18.112777       1 main.go:148] setting mtu 1500 for CNI 
	I1124 14:15:18.112819       1 main.go:178] kindnetd IP family: "ipv4"
	I1124 14:15:18.112852       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-24T14:15:18Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1124 14:15:18.318594       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1124 14:15:18.318725       1 controller.go:381] "Waiting for informer caches to sync"
	I1124 14:15:18.318838       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1124 14:15:18.319045       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1124 14:15:18.511444       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1124 14:15:18.511545       1 metrics.go:72] Registering metrics
	I1124 14:15:18.511638       1 controller.go:711] "Syncing nftables rules"
	I1124 14:15:28.319450       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1124 14:15:28.319511       1 main.go:301] handling current node
	I1124 14:15:38.315499       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1124 14:15:38.315540       1 main.go:301] handling current node
	
	
	==> kube-apiserver [9daec69fec979092d99835cb7facbe34c316a5ae6daee539ca0eb850d6ecff2f] <==
	I1124 14:14:58.920572       1 aggregator.go:166] initial CRD sync complete...
	I1124 14:14:58.920579       1 autoregister_controller.go:141] Starting autoregister controller
	I1124 14:14:58.920587       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1124 14:14:58.920594       1 cache.go:39] Caches are synced for autoregister controller
	I1124 14:14:58.920743       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1124 14:14:58.920809       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1124 14:14:58.920997       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1124 14:14:58.927840       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1124 14:14:58.924172       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1124 14:14:58.974855       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1124 14:14:59.625845       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1124 14:14:59.631393       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1124 14:14:59.631496       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1124 14:15:00.695010       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1124 14:15:00.753205       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1124 14:15:00.864671       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	I1124 14:15:00.871551       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	W1124 14:15:00.876177       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1124 14:15:00.877649       1 controller.go:624] quota admission added evaluator for: endpoints
	I1124 14:15:00.891782       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1124 14:15:02.396490       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1124 14:15:02.413562       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1124 14:15:02.427261       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1124 14:15:14.387695       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1124 14:15:14.544961       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [a49d1381084ce6e222206ab0a827b1a25a73e144d5d6cd49543b720952c028fa] <==
	I1124 14:15:13.909652       1 shared_informer.go:318] Caches are synced for ReplicationController
	I1124 14:15:13.920744       1 shared_informer.go:318] Caches are synced for disruption
	I1124 14:15:13.927471       1 shared_informer.go:318] Caches are synced for resource quota
	I1124 14:15:14.263041       1 shared_informer.go:318] Caches are synced for garbage collector
	I1124 14:15:14.263089       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1124 14:15:14.278925       1 shared_informer.go:318] Caches are synced for garbage collector
	I1124 14:15:14.407723       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1124 14:15:14.571026       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-95mv4"
	I1124 14:15:14.584181       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-b7d5h"
	I1124 14:15:14.786224       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-sjlzm"
	I1124 14:15:14.815554       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-znmnc"
	I1124 14:15:14.859331       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="451.830998ms"
	I1124 14:15:14.892204       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="31.457588ms"
	I1124 14:15:14.892294       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="55.41µs"
	I1124 14:15:15.734147       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1124 14:15:15.774966       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-sjlzm"
	I1124 14:15:15.796266       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="61.907049ms"
	I1124 14:15:15.897666       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="101.354848ms"
	I1124 14:15:15.955876       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="58.160602ms"
	I1124 14:15:15.956052       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="72.969µs"
	I1124 14:15:28.535872       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="121.938µs"
	I1124 14:15:28.556129       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="59.898µs"
	I1124 14:15:28.724936       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1124 14:15:29.725233       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="13.785099ms"
	I1124 14:15:29.725474       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="86.778µs"
	
	
	==> kube-proxy [80b7862f090b84c6cc23cff87f18aedee74e126cf00dc00a93b04679e2c82e69] <==
	I1124 14:15:15.455335       1 server_others.go:69] "Using iptables proxy"
	I1124 14:15:15.474369       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I1124 14:15:15.552250       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 14:15:15.555385       1 server_others.go:152] "Using iptables Proxier"
	I1124 14:15:15.555420       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1124 14:15:15.555427       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1124 14:15:15.555448       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1124 14:15:15.555736       1 server.go:846] "Version info" version="v1.28.0"
	I1124 14:15:15.555747       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 14:15:15.556923       1 config.go:188] "Starting service config controller"
	I1124 14:15:15.556932       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1124 14:15:15.556948       1 config.go:97] "Starting endpoint slice config controller"
	I1124 14:15:15.556952       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1124 14:15:15.557326       1 config.go:315] "Starting node config controller"
	I1124 14:15:15.557333       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1124 14:15:15.657700       1 shared_informer.go:318] Caches are synced for node config
	I1124 14:15:15.657729       1 shared_informer.go:318] Caches are synced for service config
	I1124 14:15:15.657757       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [f7b2d5a3f0a47e5e1cd79f2f02aeba916c213483e56c8dffcb87be961bb34039] <==
	W1124 14:14:59.907747       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1124 14:14:59.907781       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1124 14:14:59.914577       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1124 14:14:59.914687       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1124 14:14:59.936452       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1124 14:14:59.936489       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1124 14:15:00.031831       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1124 14:15:00.031983       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1124 14:15:00.192261       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1124 14:15:00.192405       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1124 14:15:00.202701       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1124 14:15:00.202838       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1124 14:15:00.280818       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1124 14:15:00.280997       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1124 14:15:00.281138       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1124 14:15:00.281185       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1124 14:15:00.286625       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1124 14:15:00.286774       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1124 14:15:00.286873       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1124 14:15:00.286927       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1124 14:15:00.303933       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1124 14:15:00.304041       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1124 14:15:00.342373       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1124 14:15:00.342521       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1124 14:15:03.486249       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 24 14:15:14 old-k8s-version-706771 kubelet[1365]: I1124 14:15:14.804335    1365 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4jwg\" (UniqueName: \"kubernetes.io/projected/8ece550d-eacc-4c08-8445-2cf7769e2988-kube-api-access-g4jwg\") pod \"kube-proxy-b7d5h\" (UID: \"8ece550d-eacc-4c08-8445-2cf7769e2988\") " pod="kube-system/kube-proxy-b7d5h"
	Nov 24 14:15:14 old-k8s-version-706771 kubelet[1365]: I1124 14:15:14.804389    1365 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8ece550d-eacc-4c08-8445-2cf7769e2988-kube-proxy\") pod \"kube-proxy-b7d5h\" (UID: \"8ece550d-eacc-4c08-8445-2cf7769e2988\") " pod="kube-system/kube-proxy-b7d5h"
	Nov 24 14:15:14 old-k8s-version-706771 kubelet[1365]: W1124 14:15:14.923999    1365 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/2c35ba6c594249d37f1a4deea6c290d24e73eb238087551c9a88d96856e2c4a5/crio-58f3d79fb8ac0d72cda956f8d452ab8126a82169af685f6db86ea90fadd4a9c2 WatchSource:0}: Error finding container 58f3d79fb8ac0d72cda956f8d452ab8126a82169af685f6db86ea90fadd4a9c2: Status 404 returned error can't find the container with id 58f3d79fb8ac0d72cda956f8d452ab8126a82169af685f6db86ea90fadd4a9c2
	Nov 24 14:15:15 old-k8s-version-706771 kubelet[1365]: W1124 14:15:15.222362    1365 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/2c35ba6c594249d37f1a4deea6c290d24e73eb238087551c9a88d96856e2c4a5/crio-8e552ce1c51f9b7087a5a551ababb42319aea0f85d491922343b025a561a10b7 WatchSource:0}: Error finding container 8e552ce1c51f9b7087a5a551ababb42319aea0f85d491922343b025a561a10b7: Status 404 returned error can't find the container with id 8e552ce1c51f9b7087a5a551ababb42319aea0f85d491922343b025a561a10b7
	Nov 24 14:15:15 old-k8s-version-706771 kubelet[1365]: I1124 14:15:15.703858    1365 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-b7d5h" podStartSLOduration=1.70380221 podCreationTimestamp="2025-11-24 14:15:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 14:15:15.703709105 +0000 UTC m=+13.354444697" watchObservedRunningTime="2025-11-24 14:15:15.70380221 +0000 UTC m=+13.354537679"
	Nov 24 14:15:22 old-k8s-version-706771 kubelet[1365]: I1124 14:15:22.554866    1365 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-95mv4" podStartSLOduration=5.525762069 podCreationTimestamp="2025-11-24 14:15:14 +0000 UTC" firstStartedPulling="2025-11-24 14:15:14.932430843 +0000 UTC m=+12.583166312" lastFinishedPulling="2025-11-24 14:15:17.961491061 +0000 UTC m=+15.612226530" observedRunningTime="2025-11-24 14:15:18.67661526 +0000 UTC m=+16.327350737" watchObservedRunningTime="2025-11-24 14:15:22.554822287 +0000 UTC m=+20.205557756"
	Nov 24 14:15:28 old-k8s-version-706771 kubelet[1365]: I1124 14:15:28.503443    1365 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Nov 24 14:15:28 old-k8s-version-706771 kubelet[1365]: I1124 14:15:28.534540    1365 topology_manager.go:215] "Topology Admit Handler" podUID="b0bcf3de-2ab4-48ba-b370-da2bf423cfdd" podNamespace="kube-system" podName="coredns-5dd5756b68-znmnc"
	Nov 24 14:15:28 old-k8s-version-706771 kubelet[1365]: I1124 14:15:28.541284    1365 topology_manager.go:215] "Topology Admit Handler" podUID="3ca62966-d0c0-4e3f-8e48-8afb3c015191" podNamespace="kube-system" podName="storage-provisioner"
	Nov 24 14:15:28 old-k8s-version-706771 kubelet[1365]: I1124 14:15:28.602582    1365 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b0bcf3de-2ab4-48ba-b370-da2bf423cfdd-config-volume\") pod \"coredns-5dd5756b68-znmnc\" (UID: \"b0bcf3de-2ab4-48ba-b370-da2bf423cfdd\") " pod="kube-system/coredns-5dd5756b68-znmnc"
	Nov 24 14:15:28 old-k8s-version-706771 kubelet[1365]: I1124 14:15:28.602639    1365 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/3ca62966-d0c0-4e3f-8e48-8afb3c015191-tmp\") pod \"storage-provisioner\" (UID: \"3ca62966-d0c0-4e3f-8e48-8afb3c015191\") " pod="kube-system/storage-provisioner"
	Nov 24 14:15:28 old-k8s-version-706771 kubelet[1365]: I1124 14:15:28.602668    1365 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pxzt\" (UniqueName: \"kubernetes.io/projected/3ca62966-d0c0-4e3f-8e48-8afb3c015191-kube-api-access-7pxzt\") pod \"storage-provisioner\" (UID: \"3ca62966-d0c0-4e3f-8e48-8afb3c015191\") " pod="kube-system/storage-provisioner"
	Nov 24 14:15:28 old-k8s-version-706771 kubelet[1365]: I1124 14:15:28.602695    1365 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92455\" (UniqueName: \"kubernetes.io/projected/b0bcf3de-2ab4-48ba-b370-da2bf423cfdd-kube-api-access-92455\") pod \"coredns-5dd5756b68-znmnc\" (UID: \"b0bcf3de-2ab4-48ba-b370-da2bf423cfdd\") " pod="kube-system/coredns-5dd5756b68-znmnc"
	Nov 24 14:15:28 old-k8s-version-706771 kubelet[1365]: W1124 14:15:28.856515    1365 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/2c35ba6c594249d37f1a4deea6c290d24e73eb238087551c9a88d96856e2c4a5/crio-102a30125607d104447b3c785ee4f3b45eedb76b2f82ee6905f113ec40728e39 WatchSource:0}: Error finding container 102a30125607d104447b3c785ee4f3b45eedb76b2f82ee6905f113ec40728e39: Status 404 returned error can't find the container with id 102a30125607d104447b3c785ee4f3b45eedb76b2f82ee6905f113ec40728e39
	Nov 24 14:15:28 old-k8s-version-706771 kubelet[1365]: W1124 14:15:28.865129    1365 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/2c35ba6c594249d37f1a4deea6c290d24e73eb238087551c9a88d96856e2c4a5/crio-32ba48a5e82da3ecfe2e95942b23ffcd83d4e51b9be3c9de2e3c1aa3e214d518 WatchSource:0}: Error finding container 32ba48a5e82da3ecfe2e95942b23ffcd83d4e51b9be3c9de2e3c1aa3e214d518: Status 404 returned error can't find the container with id 32ba48a5e82da3ecfe2e95942b23ffcd83d4e51b9be3c9de2e3c1aa3e214d518
	Nov 24 14:15:29 old-k8s-version-706771 kubelet[1365]: I1124 14:15:29.712053    1365 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=13.712012921 podCreationTimestamp="2025-11-24 14:15:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 14:15:29.697470513 +0000 UTC m=+27.348206006" watchObservedRunningTime="2025-11-24 14:15:29.712012921 +0000 UTC m=+27.362748398"
	Nov 24 14:15:32 old-k8s-version-706771 kubelet[1365]: I1124 14:15:32.092894    1365 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-znmnc" podStartSLOduration=18.092838334 podCreationTimestamp="2025-11-24 14:15:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 14:15:29.715839582 +0000 UTC m=+27.366575051" watchObservedRunningTime="2025-11-24 14:15:32.092838334 +0000 UTC m=+29.743573811"
	Nov 24 14:15:32 old-k8s-version-706771 kubelet[1365]: I1124 14:15:32.093264    1365 topology_manager.go:215] "Topology Admit Handler" podUID="1305436e-3503-4029-912d-8c8cf12da01f" podNamespace="default" podName="busybox"
	Nov 24 14:15:32 old-k8s-version-706771 kubelet[1365]: W1124 14:15:32.097235    1365 reflector.go:535] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:old-k8s-version-706771" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-706771' and this object
	Nov 24 14:15:32 old-k8s-version-706771 kubelet[1365]: E1124 14:15:32.097297    1365 reflector.go:147] object-"default"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:old-k8s-version-706771" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-706771' and this object
	Nov 24 14:15:32 old-k8s-version-706771 kubelet[1365]: I1124 14:15:32.225966    1365 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8s5q\" (UniqueName: \"kubernetes.io/projected/1305436e-3503-4029-912d-8c8cf12da01f-kube-api-access-n8s5q\") pod \"busybox\" (UID: \"1305436e-3503-4029-912d-8c8cf12da01f\") " pod="default/busybox"
	Nov 24 14:15:33 old-k8s-version-706771 kubelet[1365]: E1124 14:15:33.338223    1365 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Nov 24 14:15:33 old-k8s-version-706771 kubelet[1365]: E1124 14:15:33.338274    1365 projected.go:198] Error preparing data for projected volume kube-api-access-n8s5q for pod default/busybox: failed to sync configmap cache: timed out waiting for the condition
	Nov 24 14:15:33 old-k8s-version-706771 kubelet[1365]: E1124 14:15:33.338359    1365 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1305436e-3503-4029-912d-8c8cf12da01f-kube-api-access-n8s5q podName:1305436e-3503-4029-912d-8c8cf12da01f nodeName:}" failed. No retries permitted until 2025-11-24 14:15:33.838334829 +0000 UTC m=+31.489070297 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-n8s5q" (UniqueName: "kubernetes.io/projected/1305436e-3503-4029-912d-8c8cf12da01f-kube-api-access-n8s5q") pod "busybox" (UID: "1305436e-3503-4029-912d-8c8cf12da01f") : failed to sync configmap cache: timed out waiting for the condition
	Nov 24 14:15:33 old-k8s-version-706771 kubelet[1365]: W1124 14:15:33.918206    1365 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/2c35ba6c594249d37f1a4deea6c290d24e73eb238087551c9a88d96856e2c4a5/crio-97a40691bfd630ea563522f64b5aaa0676ca597e0f1b0970ff3852877ecaf6f9 WatchSource:0}: Error finding container 97a40691bfd630ea563522f64b5aaa0676ca597e0f1b0970ff3852877ecaf6f9: Status 404 returned error can't find the container with id 97a40691bfd630ea563522f64b5aaa0676ca597e0f1b0970ff3852877ecaf6f9
	
	
	==> storage-provisioner [7968de1444c98fc251afa9e6d660917457a835787e63fbd1533e9d44701200e5] <==
	I1124 14:15:28.935167       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1124 14:15:28.950704       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1124 14:15:28.950824       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1124 14:15:28.959653       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1124 14:15:28.959806       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-706771_f30d79b4-92ec-4247-a36a-3b06a4ddcda5!
	I1124 14:15:28.960687       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"83d8df4a-4d76-4886-b058-96eaa24ce4dc", APIVersion:"v1", ResourceVersion:"442", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-706771_f30d79b4-92ec-4247-a36a-3b06a4ddcda5 became leader
	I1124 14:15:29.061919       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-706771_f30d79b4-92ec-4247-a36a-3b06a4ddcda5!
	

                                                
                                                
-- /stdout --
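The scheduler's "forbidden" errors in the dump above are the usual startup ordering race: kube-scheduler starts its informer lists before the RBAC bindings for system:kube-scheduler are being served, and the closing "Caches are synced" line shows it recovered on its own. The kubelet's "failed to sync configmap cache" errors for kube-root-ca.crt in the same dump are the same pattern on the node side. A minimal after-the-fact check, assuming the test context is still reachable (these are stock kubectl subcommands, nothing minikube-specific):

	kubectl --context old-k8s-version-706771 auth can-i list services --as=system:kube-scheduler
	kubectl --context old-k8s-version-706771 auth can-i watch nodes --as=system:kube-scheduler
	kubectl --context old-k8s-version-706771 -n default get configmap kube-root-ca.crt

All three should succeed once RBAC and the configmap caches have settled; a persistent "no" or NotFound would point at a real permissions problem rather than a startup race.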
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-706771 -n old-k8s-version-706771
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-706771 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.50s)
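One healthy signal in the same post-mortem: the storage-provisioner acquired its leader-election lease normally. That election is recorded as an annotation on an Endpoints object, so the current holder can be read back directly; a read-only sketch, assuming the same context (the annotation key shown is the standard client-go EndpointsLock key):

	kubectl --context old-k8s-version-706771 -n kube-system get endpoints k8s.io-minikube-hostpath \
	  -o jsonpath='{.metadata.annotations.control-plane\.alpha\.kubernetes\.io/leader}'

Per the log above, the payload should name old-k8s-version-706771_f30d79b4-92ec-4247-a36a-3b06a4ddcda5 as the holder, along with acquire and renew timestamps.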

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (7.64s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-706771 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p old-k8s-version-706771 --alsologtostderr -v=1: exit status 80 (1.85212077s)

                                                
                                                
-- stdout --
	* Pausing node old-k8s-version-706771 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1124 14:16:56.478355  186580 out.go:360] Setting OutFile to fd 1 ...
	I1124 14:16:56.478532  186580 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 14:16:56.478557  186580 out.go:374] Setting ErrFile to fd 2...
	I1124 14:16:56.478577  186580 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 14:16:56.478983  186580 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-2805/.minikube/bin
	I1124 14:16:56.479314  186580 out.go:368] Setting JSON to false
	I1124 14:16:56.480413  186580 mustload.go:66] Loading cluster: old-k8s-version-706771
	I1124 14:16:56.480897  186580 config.go:182] Loaded profile config "old-k8s-version-706771": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1124 14:16:56.481422  186580 cli_runner.go:164] Run: docker container inspect old-k8s-version-706771 --format={{.State.Status}}
	I1124 14:16:56.507468  186580 host.go:66] Checking if "old-k8s-version-706771" exists ...
	I1124 14:16:56.508084  186580 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 14:16:56.576257  186580 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-24 14:16:56.566429058 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 14:16:56.576894  186580 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763503576-21924/minikube-v1.37.0-1763503576-21924-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763503576-21924-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:old-k8s-version-706771 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1124 14:16:56.580062  186580 out.go:179] * Pausing node old-k8s-version-706771 ... 
	I1124 14:16:56.582750  186580 host.go:66] Checking if "old-k8s-version-706771" exists ...
	I1124 14:16:56.583105  186580 ssh_runner.go:195] Run: systemctl --version
	I1124 14:16:56.583151  186580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-706771
	I1124 14:16:56.600528  186580 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/old-k8s-version-706771/id_rsa Username:docker}
	I1124 14:16:56.705906  186580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 14:16:56.719027  186580 pause.go:52] kubelet running: true
	I1124 14:16:56.719131  186580 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1124 14:16:56.971117  186580 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1124 14:16:56.971201  186580 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1124 14:16:57.054492  186580 cri.go:89] found id: "fd07626170d21832af144510fb26073eb6f20f7c7dcca410390fc64f77e7f864"
	I1124 14:16:57.054517  186580 cri.go:89] found id: "9169afa245f154ace8b11f56579ef064687ab842b2b449fc8e57beaf11efb2fc"
	I1124 14:16:57.054522  186580 cri.go:89] found id: "662792c1d19f88cbc7c05bc4aa78cf420616277ed272fd2a62c1fd7eea280ac9"
	I1124 14:16:57.054525  186580 cri.go:89] found id: "6bfac4080fd664cdaea1a7c4e8cdce7bf4757d53582813085720bab8e65f5a85"
	I1124 14:16:57.054529  186580 cri.go:89] found id: "3f6d39d6f582e8f8f5d54e4b32d73192d578d30ce39530503930d8ec0e325ccd"
	I1124 14:16:57.054533  186580 cri.go:89] found id: "4462433cacd7fb9b40a0bc0ba0ab22736d2abbc9e6137cbb6e9ad29470cd488b"
	I1124 14:16:57.054536  186580 cri.go:89] found id: "d2ae3d6088d6c2a4d1067c258f51a7ecb4899a0bf6d5568e6df682d6446ca5c7"
	I1124 14:16:57.054539  186580 cri.go:89] found id: "39e49abd246636c62439ed76a4481780faa5610ca54df7818f5d5ec1656a3fc6"
	I1124 14:16:57.054542  186580 cri.go:89] found id: "065e2235590533b061f78ed69d188fc0f922c3d8f7c9e6624ca6b074b6ff8055"
	I1124 14:16:57.054562  186580 cri.go:89] found id: "943b2469fd266d2e12ec380d77c35b02f3076920d7d98bb2179ebb87698073e6"
	I1124 14:16:57.054572  186580 cri.go:89] found id: "7cacf35d21de93fc134f70c789842b0bb01f94a9f81d988ca016c0707c19e476"
	I1124 14:16:57.054575  186580 cri.go:89] found id: ""
	I1124 14:16:57.054633  186580 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 14:16:57.073279  186580 retry.go:31] will retry after 227.135359ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T14:16:57Z" level=error msg="open /run/runc: no such file or directory"
	I1124 14:16:57.300552  186580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 14:16:57.313662  186580 pause.go:52] kubelet running: false
	I1124 14:16:57.313754  186580 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1124 14:16:57.484552  186580 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1124 14:16:57.484707  186580 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1124 14:16:57.565937  186580 cri.go:89] found id: "fd07626170d21832af144510fb26073eb6f20f7c7dcca410390fc64f77e7f864"
	I1124 14:16:57.565962  186580 cri.go:89] found id: "9169afa245f154ace8b11f56579ef064687ab842b2b449fc8e57beaf11efb2fc"
	I1124 14:16:57.565967  186580 cri.go:89] found id: "662792c1d19f88cbc7c05bc4aa78cf420616277ed272fd2a62c1fd7eea280ac9"
	I1124 14:16:57.565971  186580 cri.go:89] found id: "6bfac4080fd664cdaea1a7c4e8cdce7bf4757d53582813085720bab8e65f5a85"
	I1124 14:16:57.565975  186580 cri.go:89] found id: "3f6d39d6f582e8f8f5d54e4b32d73192d578d30ce39530503930d8ec0e325ccd"
	I1124 14:16:57.565978  186580 cri.go:89] found id: "4462433cacd7fb9b40a0bc0ba0ab22736d2abbc9e6137cbb6e9ad29470cd488b"
	I1124 14:16:57.565981  186580 cri.go:89] found id: "d2ae3d6088d6c2a4d1067c258f51a7ecb4899a0bf6d5568e6df682d6446ca5c7"
	I1124 14:16:57.565984  186580 cri.go:89] found id: "39e49abd246636c62439ed76a4481780faa5610ca54df7818f5d5ec1656a3fc6"
	I1124 14:16:57.565987  186580 cri.go:89] found id: "065e2235590533b061f78ed69d188fc0f922c3d8f7c9e6624ca6b074b6ff8055"
	I1124 14:16:57.565993  186580 cri.go:89] found id: "943b2469fd266d2e12ec380d77c35b02f3076920d7d98bb2179ebb87698073e6"
	I1124 14:16:57.565996  186580 cri.go:89] found id: "7cacf35d21de93fc134f70c789842b0bb01f94a9f81d988ca016c0707c19e476"
	I1124 14:16:57.565999  186580 cri.go:89] found id: ""
	I1124 14:16:57.566049  186580 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 14:16:57.580963  186580 retry.go:31] will retry after 393.118129ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T14:16:57Z" level=error msg="open /run/runc: no such file or directory"
	I1124 14:16:57.974307  186580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 14:16:57.994049  186580 pause.go:52] kubelet running: false
	I1124 14:16:57.994174  186580 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1124 14:16:58.167470  186580 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1124 14:16:58.167613  186580 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1124 14:16:58.246637  186580 cri.go:89] found id: "fd07626170d21832af144510fb26073eb6f20f7c7dcca410390fc64f77e7f864"
	I1124 14:16:58.246711  186580 cri.go:89] found id: "9169afa245f154ace8b11f56579ef064687ab842b2b449fc8e57beaf11efb2fc"
	I1124 14:16:58.246730  186580 cri.go:89] found id: "662792c1d19f88cbc7c05bc4aa78cf420616277ed272fd2a62c1fd7eea280ac9"
	I1124 14:16:58.246753  186580 cri.go:89] found id: "6bfac4080fd664cdaea1a7c4e8cdce7bf4757d53582813085720bab8e65f5a85"
	I1124 14:16:58.246788  186580 cri.go:89] found id: "3f6d39d6f582e8f8f5d54e4b32d73192d578d30ce39530503930d8ec0e325ccd"
	I1124 14:16:58.246810  186580 cri.go:89] found id: "4462433cacd7fb9b40a0bc0ba0ab22736d2abbc9e6137cbb6e9ad29470cd488b"
	I1124 14:16:58.246832  186580 cri.go:89] found id: "d2ae3d6088d6c2a4d1067c258f51a7ecb4899a0bf6d5568e6df682d6446ca5c7"
	I1124 14:16:58.246866  186580 cri.go:89] found id: "39e49abd246636c62439ed76a4481780faa5610ca54df7818f5d5ec1656a3fc6"
	I1124 14:16:58.246888  186580 cri.go:89] found id: "065e2235590533b061f78ed69d188fc0f922c3d8f7c9e6624ca6b074b6ff8055"
	I1124 14:16:58.246917  186580 cri.go:89] found id: "943b2469fd266d2e12ec380d77c35b02f3076920d7d98bb2179ebb87698073e6"
	I1124 14:16:58.246948  186580 cri.go:89] found id: "7cacf35d21de93fc134f70c789842b0bb01f94a9f81d988ca016c0707c19e476"
	I1124 14:16:58.246971  186580 cri.go:89] found id: ""
	I1124 14:16:58.247053  186580 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 14:16:58.262245  186580 out.go:203] 
	W1124 14:16:58.265052  186580 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T14:16:58Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T14:16:58Z" level=error msg="open /run/runc: no such file or directory"
	
	W1124 14:16:58.265076  186580 out.go:285] * 
	* 
	W1124 14:16:58.270584  186580 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1124 14:16:58.273795  186580 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p old-k8s-version-706771 --alsologtostderr -v=1 failed: exit status 80
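The failure is narrower than the banner suggests: disabling the kubelet succeeds on the first iteration (kubelet running: true, then false), but every retry of the container listing dies because runc's default state directory /run/runc does not exist on this crio node. The probe can be replayed by hand; a diagnostic sketch, assuming the profile is still running (minikube ssh, runc list, and crictl ps are all real commands; treating /run/crun as the alternative state root is an assumption about how crio is configured on this image):

	minikube -p old-k8s-version-706771 ssh "sudo runc list -f json"     # the exact probe the pause path retries
	minikube -p old-k8s-version-706771 ssh "ls -d /run/runc /run/crun"  # which runtime state roots actually exist
	minikube -p old-k8s-version-706771 ssh "sudo crictl ps"             # the CRI view of the same containers

If /run/runc is absent while crictl still lists running containers, the workloads are tracked under a different runtime root and the runc-based pause probe will fail on every retry, which matches the three attempts above.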
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-706771
helpers_test.go:243: (dbg) docker inspect old-k8s-version-706771:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "2c35ba6c594249d37f1a4deea6c290d24e73eb238087551c9a88d96856e2c4a5",
	        "Created": "2025-11-24T14:14:37.23388933Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 184499,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T14:15:57.514328709Z",
	            "FinishedAt": "2025-11-24T14:15:56.67845261Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/2c35ba6c594249d37f1a4deea6c290d24e73eb238087551c9a88d96856e2c4a5/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2c35ba6c594249d37f1a4deea6c290d24e73eb238087551c9a88d96856e2c4a5/hostname",
	        "HostsPath": "/var/lib/docker/containers/2c35ba6c594249d37f1a4deea6c290d24e73eb238087551c9a88d96856e2c4a5/hosts",
	        "LogPath": "/var/lib/docker/containers/2c35ba6c594249d37f1a4deea6c290d24e73eb238087551c9a88d96856e2c4a5/2c35ba6c594249d37f1a4deea6c290d24e73eb238087551c9a88d96856e2c4a5-json.log",
	        "Name": "/old-k8s-version-706771",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-706771:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-706771",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "2c35ba6c594249d37f1a4deea6c290d24e73eb238087551c9a88d96856e2c4a5",
	                "LowerDir": "/var/lib/docker/overlay2/579064394e8bd6cc39cd24d2c9fba4cd60161e321fb2311b577ba2e021b4a846-init/diff:/var/lib/docker/overlay2/13a44a1c9c7389f495d930a01834ff28273a0e5eb2fe3411fc4db3ff0709690d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/579064394e8bd6cc39cd24d2c9fba4cd60161e321fb2311b577ba2e021b4a846/merged",
	                "UpperDir": "/var/lib/docker/overlay2/579064394e8bd6cc39cd24d2c9fba4cd60161e321fb2311b577ba2e021b4a846/diff",
	                "WorkDir": "/var/lib/docker/overlay2/579064394e8bd6cc39cd24d2c9fba4cd60161e321fb2311b577ba2e021b4a846/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-706771",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-706771/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-706771",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-706771",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-706771",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ef8307883d1a7d1659ce14cdd9da7911669f832792e2815eb8e829415ea70fdf",
	            "SandboxKey": "/var/run/docker/netns/ef8307883d1a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33053"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33054"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33057"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33055"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33056"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-706771": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ce:57:f6:4e:48:1c",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e96418466e2f103f798236cd2dcf5c79e483562bd7b0670ad5747c94e35ac056",
	                    "EndpointID": "063fbd8f0cf65bfa5719910692cc6bac45cf7f022eb7b8a64df536e2b7ea6e82",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-706771",
	                        "2c35ba6c5942"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
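For the pause state specifically, the full inspect is more than needed; the two relevant fields come out of a Go-template one-liner (standard docker inspect formatting, using the container name from the inspect above):

	docker inspect -f 'status={{.State.Status}} paused={{.State.Paused}}' old-k8s-version-706771

Here it would print status=running paused=false, consistent with the pause command failing before it ever froze the container.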
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-706771 -n old-k8s-version-706771
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-706771 -n old-k8s-version-706771: exit status 2 (345.07994ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-706771 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-706771 logs -n 25: (1.359224499s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-626991 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-626991             │ jenkins │ v1.37.0 │ 24 Nov 25 14:13 UTC │                     │
	│ ssh     │ -p cilium-626991 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-626991             │ jenkins │ v1.37.0 │ 24 Nov 25 14:13 UTC │                     │
	│ ssh     │ -p cilium-626991 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-626991             │ jenkins │ v1.37.0 │ 24 Nov 25 14:13 UTC │                     │
	│ ssh     │ -p cilium-626991 sudo containerd config dump                                                                                                                                                                                                  │ cilium-626991             │ jenkins │ v1.37.0 │ 24 Nov 25 14:13 UTC │                     │
	│ ssh     │ -p cilium-626991 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-626991             │ jenkins │ v1.37.0 │ 24 Nov 25 14:13 UTC │                     │
	│ ssh     │ -p cilium-626991 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-626991             │ jenkins │ v1.37.0 │ 24 Nov 25 14:13 UTC │                     │
	│ ssh     │ -p cilium-626991 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-626991             │ jenkins │ v1.37.0 │ 24 Nov 25 14:13 UTC │                     │
	│ ssh     │ -p cilium-626991 sudo crio config                                                                                                                                                                                                             │ cilium-626991             │ jenkins │ v1.37.0 │ 24 Nov 25 14:13 UTC │                     │
	│ delete  │ -p cilium-626991                                                                                                                                                                                                                              │ cilium-626991             │ jenkins │ v1.37.0 │ 24 Nov 25 14:13 UTC │ 24 Nov 25 14:13 UTC │
	│ start   │ -p force-systemd-env-289577 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-289577  │ jenkins │ v1.37.0 │ 24 Nov 25 14:13 UTC │ 24 Nov 25 14:13 UTC │
	│ ssh     │ force-systemd-flag-928059 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-928059 │ jenkins │ v1.37.0 │ 24 Nov 25 14:13 UTC │ 24 Nov 25 14:13 UTC │
	│ delete  │ -p force-systemd-flag-928059                                                                                                                                                                                                                  │ force-systemd-flag-928059 │ jenkins │ v1.37.0 │ 24 Nov 25 14:13 UTC │ 24 Nov 25 14:13 UTC │
	│ start   │ -p cert-expiration-032076 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-032076    │ jenkins │ v1.37.0 │ 24 Nov 25 14:13 UTC │ 24 Nov 25 14:14 UTC │
	│ delete  │ -p force-systemd-env-289577                                                                                                                                                                                                                   │ force-systemd-env-289577  │ jenkins │ v1.37.0 │ 24 Nov 25 14:13 UTC │ 24 Nov 25 14:13 UTC │
	│ start   │ -p cert-options-097221 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-097221       │ jenkins │ v1.37.0 │ 24 Nov 25 14:13 UTC │ 24 Nov 25 14:14 UTC │
	│ ssh     │ cert-options-097221 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-097221       │ jenkins │ v1.37.0 │ 24 Nov 25 14:14 UTC │ 24 Nov 25 14:14 UTC │
	│ ssh     │ -p cert-options-097221 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-097221       │ jenkins │ v1.37.0 │ 24 Nov 25 14:14 UTC │ 24 Nov 25 14:14 UTC │
	│ delete  │ -p cert-options-097221                                                                                                                                                                                                                        │ cert-options-097221       │ jenkins │ v1.37.0 │ 24 Nov 25 14:14 UTC │ 24 Nov 25 14:14 UTC │
	│ start   │ -p old-k8s-version-706771 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-706771    │ jenkins │ v1.37.0 │ 24 Nov 25 14:14 UTC │ 24 Nov 25 14:15 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-706771 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-706771    │ jenkins │ v1.37.0 │ 24 Nov 25 14:15 UTC │                     │
	│ stop    │ -p old-k8s-version-706771 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-706771    │ jenkins │ v1.37.0 │ 24 Nov 25 14:15 UTC │ 24 Nov 25 14:15 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-706771 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-706771    │ jenkins │ v1.37.0 │ 24 Nov 25 14:15 UTC │ 24 Nov 25 14:15 UTC │
	│ start   │ -p old-k8s-version-706771 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-706771    │ jenkins │ v1.37.0 │ 24 Nov 25 14:15 UTC │ 24 Nov 25 14:16 UTC │
	│ image   │ old-k8s-version-706771 image list --format=json                                                                                                                                                                                               │ old-k8s-version-706771    │ jenkins │ v1.37.0 │ 24 Nov 25 14:16 UTC │ 24 Nov 25 14:16 UTC │
	│ pause   │ -p old-k8s-version-706771 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-706771    │ jenkins │ v1.37.0 │ 24 Nov 25 14:16 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 14:15:57
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 14:15:57.219692  184373 out.go:360] Setting OutFile to fd 1 ...
	I1124 14:15:57.219864  184373 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 14:15:57.219894  184373 out.go:374] Setting ErrFile to fd 2...
	I1124 14:15:57.219916  184373 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 14:15:57.220182  184373 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-2805/.minikube/bin
	I1124 14:15:57.220579  184373 out.go:368] Setting JSON to false
	I1124 14:15:57.221500  184373 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":7109,"bootTime":1763986649,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1124 14:15:57.221598  184373 start.go:143] virtualization:  
	I1124 14:15:57.226676  184373 out.go:179] * [old-k8s-version-706771] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1124 14:15:57.229995  184373 out.go:179]   - MINIKUBE_LOCATION=21932
	I1124 14:15:57.230080  184373 notify.go:221] Checking for updates...
	I1124 14:15:57.237545  184373 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 14:15:57.240462  184373 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21932-2805/kubeconfig
	I1124 14:15:57.243472  184373 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21932-2805/.minikube
	I1124 14:15:57.246535  184373 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1124 14:15:57.249540  184373 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 14:15:57.253066  184373 config.go:182] Loaded profile config "old-k8s-version-706771": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1124 14:15:57.256650  184373 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1124 14:15:57.259532  184373 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 14:15:57.288983  184373 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1124 14:15:57.289113  184373 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 14:15:57.356135  184373 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-24 14:15:57.340950793 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 14:15:57.356255  184373 docker.go:319] overlay module found
	I1124 14:15:57.360446  184373 out.go:179] * Using the docker driver based on existing profile
	I1124 14:15:57.363339  184373 start.go:309] selected driver: docker
	I1124 14:15:57.363506  184373 start.go:927] validating driver "docker" against &{Name:old-k8s-version-706771 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-706771 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 14:15:57.363631  184373 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 14:15:57.364327  184373 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 14:15:57.429427  184373 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-24 14:15:57.417979167 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 14:15:57.429793  184373 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 14:15:57.429821  184373 cni.go:84] Creating CNI manager for ""
	I1124 14:15:57.429875  184373 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 14:15:57.429919  184373 start.go:353] cluster config:
	{Name:old-k8s-version-706771 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-706771 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 14:15:57.433065  184373 out.go:179] * Starting "old-k8s-version-706771" primary control-plane node in "old-k8s-version-706771" cluster
	I1124 14:15:57.435857  184373 cache.go:134] Beginning downloading kic base image for docker with crio
	I1124 14:15:57.438672  184373 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1124 14:15:57.441514  184373 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1124 14:15:57.441562  184373 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21932-2805/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1124 14:15:57.441577  184373 cache.go:65] Caching tarball of preloaded images
	I1124 14:15:57.441583  184373 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1124 14:15:57.441661  184373 preload.go:238] Found /home/jenkins/minikube-integration/21932-2805/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1124 14:15:57.441672  184373 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1124 14:15:57.441785  184373 profile.go:143] Saving config to /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/old-k8s-version-706771/config.json ...
	I1124 14:15:57.461078  184373 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1124 14:15:57.461100  184373 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1124 14:15:57.461118  184373 cache.go:240] Successfully downloaded all kic artifacts
	I1124 14:15:57.461146  184373 start.go:360] acquireMachinesLock for old-k8s-version-706771: {Name:mk711f4c72b219775cdb44b18881f9cc36cbc056 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 14:15:57.461210  184373 start.go:364] duration metric: took 41.404µs to acquireMachinesLock for "old-k8s-version-706771"
	I1124 14:15:57.461235  184373 start.go:96] Skipping create...Using existing machine configuration
	I1124 14:15:57.461241  184373 fix.go:54] fixHost starting: 
	I1124 14:15:57.461517  184373 cli_runner.go:164] Run: docker container inspect old-k8s-version-706771 --format={{.State.Status}}
	I1124 14:15:57.477884  184373 fix.go:112] recreateIfNeeded on old-k8s-version-706771: state=Stopped err=<nil>
	W1124 14:15:57.477910  184373 fix.go:138] unexpected machine state, will restart: <nil>
	I1124 14:15:57.481153  184373 out.go:252] * Restarting existing docker container for "old-k8s-version-706771" ...
	I1124 14:15:57.481237  184373 cli_runner.go:164] Run: docker start old-k8s-version-706771
	I1124 14:15:57.715319  184373 cli_runner.go:164] Run: docker container inspect old-k8s-version-706771 --format={{.State.Status}}
	I1124 14:15:57.745507  184373 kic.go:430] container "old-k8s-version-706771" state is running.
	I1124 14:15:57.746097  184373 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-706771
	I1124 14:15:57.771020  184373 profile.go:143] Saving config to /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/old-k8s-version-706771/config.json ...
	I1124 14:15:57.771320  184373 machine.go:94] provisionDockerMachine start ...
	I1124 14:15:57.771498  184373 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-706771
	I1124 14:15:57.793243  184373 main.go:143] libmachine: Using SSH client type: native
	I1124 14:15:57.793564  184373 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33053 <nil> <nil>}
	I1124 14:15:57.793579  184373 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 14:15:57.794178  184373 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
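	The first dial fails with EOF because sshd inside the just-restarted container is not yet accepting connections; libmachine simply retries, and the same command succeeds about three seconds later. A minimal sketch of that readiness probe against the forwarded port, using the port and key path from this log (the 30-attempt bound is an assumption):
	
	    # Retry until sshd behind the forwarded port answers, or give up after ~30s.
	    for i in $(seq 1 30); do
	      ssh -o ConnectTimeout=1 -o StrictHostKeyChecking=no \
	          -i /home/jenkins/minikube-integration/21932-2805/.minikube/machines/old-k8s-version-706771/id_rsa \
	          -p 33053 docker@127.0.0.1 true 2>/dev/null && { echo "ssh ready"; break; }
	      sleep 1
	    done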
	I1124 14:16:00.947021  184373 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-706771
	
	I1124 14:16:00.947045  184373 ubuntu.go:182] provisioning hostname "old-k8s-version-706771"
	I1124 14:16:00.947107  184373 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-706771
	I1124 14:16:00.967659  184373 main.go:143] libmachine: Using SSH client type: native
	I1124 14:16:00.967978  184373 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33053 <nil> <nil>}
	I1124 14:16:00.967989  184373 main.go:143] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-706771 && echo "old-k8s-version-706771" | sudo tee /etc/hostname
	I1124 14:16:01.133200  184373 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-706771
	
	I1124 14:16:01.133300  184373 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-706771
	I1124 14:16:01.161854  184373 main.go:143] libmachine: Using SSH client type: native
	I1124 14:16:01.162180  184373 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33053 <nil> <nil>}
	I1124 14:16:01.162203  184373 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-706771' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-706771/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-706771' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 14:16:01.315603  184373 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 14:16:01.315627  184373 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21932-2805/.minikube CaCertPath:/home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21932-2805/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21932-2805/.minikube}
	I1124 14:16:01.315654  184373 ubuntu.go:190] setting up certificates
	I1124 14:16:01.315671  184373 provision.go:84] configureAuth start
	I1124 14:16:01.315730  184373 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-706771
	I1124 14:16:01.333977  184373 provision.go:143] copyHostCerts
	I1124 14:16:01.334063  184373 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-2805/.minikube/ca.pem, removing ...
	I1124 14:16:01.334077  184373 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-2805/.minikube/ca.pem
	I1124 14:16:01.334157  184373 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21932-2805/.minikube/ca.pem (1078 bytes)
	I1124 14:16:01.334264  184373 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-2805/.minikube/cert.pem, removing ...
	I1124 14:16:01.334270  184373 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-2805/.minikube/cert.pem
	I1124 14:16:01.334300  184373 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-2805/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21932-2805/.minikube/cert.pem (1123 bytes)
	I1124 14:16:01.334356  184373 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-2805/.minikube/key.pem, removing ...
	I1124 14:16:01.334366  184373 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-2805/.minikube/key.pem
	I1124 14:16:01.334390  184373 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-2805/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21932-2805/.minikube/key.pem (1675 bytes)
	I1124 14:16:01.334437  184373 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21932-2805/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-706771 san=[127.0.0.1 192.168.76.2 localhost minikube old-k8s-version-706771]
	I1124 14:16:01.564280  184373 provision.go:177] copyRemoteCerts
	I1124 14:16:01.564353  184373 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 14:16:01.564436  184373 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-706771
	I1124 14:16:01.582205  184373 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/old-k8s-version-706771/id_rsa Username:docker}
	I1124 14:16:01.687895  184373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1124 14:16:01.706576  184373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1124 14:16:01.725844  184373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1124 14:16:01.743501  184373 provision.go:87] duration metric: took 427.801822ms to configureAuth
	I1124 14:16:01.743526  184373 ubuntu.go:206] setting minikube options for container-runtime
	I1124 14:16:01.743706  184373 config.go:182] Loaded profile config "old-k8s-version-706771": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1124 14:16:01.743818  184373 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-706771
	I1124 14:16:01.761289  184373 main.go:143] libmachine: Using SSH client type: native
	I1124 14:16:01.761610  184373 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33053 <nil> <nil>}
	I1124 14:16:01.761630  184373 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1124 14:16:02.135914  184373 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1124 14:16:02.135941  184373 machine.go:97] duration metric: took 4.364604698s to provisionDockerMachine
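	The printf-piped-to-sudo-tee idiom above is how a root-owned drop-in gets written from an unprivileged SSH session; the trailing systemctl restart crio makes the --insecure-registry flag take effect. A quick manual check of the result (a sketch, not part of the test run):
	
	    cat /etc/sysconfig/crio.minikube   # expect: CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	    systemctl is-active crio           # expect: active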
	I1124 14:16:02.135961  184373 start.go:293] postStartSetup for "old-k8s-version-706771" (driver="docker")
	I1124 14:16:02.135979  184373 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 14:16:02.136052  184373 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 14:16:02.136095  184373 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-706771
	I1124 14:16:02.162373  184373 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/old-k8s-version-706771/id_rsa Username:docker}
	I1124 14:16:02.275771  184373 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 14:16:02.279434  184373 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 14:16:02.279460  184373 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 14:16:02.279472  184373 filesync.go:126] Scanning /home/jenkins/minikube-integration/21932-2805/.minikube/addons for local assets ...
	I1124 14:16:02.279527  184373 filesync.go:126] Scanning /home/jenkins/minikube-integration/21932-2805/.minikube/files for local assets ...
	I1124 14:16:02.279605  184373 filesync.go:149] local asset: /home/jenkins/minikube-integration/21932-2805/.minikube/files/etc/ssl/certs/46112.pem -> 46112.pem in /etc/ssl/certs
	I1124 14:16:02.279703  184373 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 14:16:02.287247  184373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/files/etc/ssl/certs/46112.pem --> /etc/ssl/certs/46112.pem (1708 bytes)
	I1124 14:16:02.306520  184373 start.go:296] duration metric: took 170.542657ms for postStartSetup
	I1124 14:16:02.306678  184373 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 14:16:02.306746  184373 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-706771
	I1124 14:16:02.324587  184373 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/old-k8s-version-706771/id_rsa Username:docker}
	I1124 14:16:02.428779  184373 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 14:16:02.434064  184373 fix.go:56] duration metric: took 4.972815079s for fixHost
	I1124 14:16:02.434092  184373 start.go:83] releasing machines lock for "old-k8s-version-706771", held for 4.972867822s
	I1124 14:16:02.434186  184373 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-706771
	I1124 14:16:02.451252  184373 ssh_runner.go:195] Run: cat /version.json
	I1124 14:16:02.451294  184373 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 14:16:02.451300  184373 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-706771
	I1124 14:16:02.451443  184373 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-706771
	I1124 14:16:02.471996  184373 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/old-k8s-version-706771/id_rsa Username:docker}
	I1124 14:16:02.472568  184373 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/old-k8s-version-706771/id_rsa Username:docker}
	I1124 14:16:02.666350  184373 ssh_runner.go:195] Run: systemctl --version
	I1124 14:16:02.673062  184373 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1124 14:16:02.711734  184373 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 14:16:02.716484  184373 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 14:16:02.716601  184373 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 14:16:02.724755  184373 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
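	"Disabling" a bridge/podman CNI config here just means renaming it with a .mk_disabled suffix so that kindnet's config is the only one the runtime loads; in this run nothing matched. Undoing it later would be the reverse rename (a sketch):
	
	    sudo find /etc/cni/net.d -maxdepth 1 -name '*.mk_disabled' \
	      -exec sh -c 'mv "$1" "${1%.mk_disabled}"' _ {} \;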
	I1124 14:16:02.724777  184373 start.go:496] detecting cgroup driver to use...
	I1124 14:16:02.724827  184373 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1124 14:16:02.724881  184373 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1124 14:16:02.740436  184373 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1124 14:16:02.754173  184373 docker.go:218] disabling cri-docker service (if available) ...
	I1124 14:16:02.754245  184373 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 14:16:02.770094  184373 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 14:16:02.784014  184373 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 14:16:02.895895  184373 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 14:16:03.026184  184373 docker.go:234] disabling docker service ...
	I1124 14:16:03.026273  184373 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 14:16:03.042182  184373 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 14:16:03.056085  184373 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 14:16:03.169269  184373 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 14:16:03.281602  184373 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 14:16:03.294883  184373 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 14:16:03.311156  184373 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1124 14:16:03.311250  184373 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 14:16:03.324082  184373 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1124 14:16:03.324165  184373 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 14:16:03.333766  184373 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 14:16:03.344117  184373 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 14:16:03.354420  184373 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 14:16:03.363968  184373 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 14:16:03.373483  184373 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 14:16:03.383029  184373 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 14:16:03.392662  184373 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 14:16:03.400460  184373 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 14:16:03.408130  184373 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 14:16:03.518022  184373 ssh_runner.go:195] Run: sudo systemctl restart crio
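	Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following settings (assembled from the commands in this log; the section headers are assumptions, since the sed scripts match key names regardless of section):
	
	    [crio.image]
	    pause_image = "registry.k8s.io/pause:3.9"
	
	    [crio.runtime]
	    cgroup_manager = "cgroupfs"
	    conmon_cgroup = "pod"
	    default_sysctls = [
	      "net.ipv4.ip_unprivileged_port_start=0",
	    ]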
	I1124 14:16:03.696101  184373 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1124 14:16:03.696186  184373 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1124 14:16:03.700411  184373 start.go:564] Will wait 60s for crictl version
	I1124 14:16:03.700491  184373 ssh_runner.go:195] Run: which crictl
	I1124 14:16:03.704058  184373 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 14:16:03.729096  184373 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1124 14:16:03.729182  184373 ssh_runner.go:195] Run: crio --version
	I1124 14:16:03.756902  184373 ssh_runner.go:195] Run: crio --version
	I1124 14:16:03.787384  184373 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.2 ...
	I1124 14:16:03.790317  184373 cli_runner.go:164] Run: docker network inspect old-k8s-version-706771 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 14:16:03.808028  184373 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1124 14:16:03.812647  184373 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
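	The { grep -v ...; echo ...; } > /tmp/h.$$; sudo cp dance rewrites /etc/hosts in one pass without needing sudo on the shell redirection itself. Verifying the entry afterwards (a sketch):
	
	    getent hosts host.minikube.internal   # expect: 192.168.76.1  host.minikube.internal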
	I1124 14:16:03.823772  184373 kubeadm.go:884] updating cluster {Name:old-k8s-version-706771 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-706771 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 14:16:03.823889  184373 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1124 14:16:03.823946  184373 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 14:16:03.857665  184373 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 14:16:03.857691  184373 crio.go:433] Images already preloaded, skipping extraction
	I1124 14:16:03.857745  184373 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 14:16:03.884251  184373 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 14:16:03.884276  184373 cache_images.go:86] Images are preloaded, skipping loading
	I1124 14:16:03.884285  184373 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.28.0 crio true true} ...
	I1124 14:16:03.884405  184373 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-706771 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-706771 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1124 14:16:03.884485  184373 ssh_runner.go:195] Run: crio config
	I1124 14:16:03.951063  184373 cni.go:84] Creating CNI manager for ""
	I1124 14:16:03.951088  184373 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 14:16:03.951116  184373 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 14:16:03.951141  184373 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-706771 NodeName:old-k8s-version-706771 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 14:16:03.951282  184373 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-706771"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
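	This manifest is written to /var/tmp/minikube/kubeadm.yaml.new a few lines below. One way to sanity-check such a file by hand is kubeadm's own validator (a sketch; `kubeadm config validate` exists in recent kubeadm releases, and its presence in the v1.28.0 binary used here is an assumption):
	
	    sudo /var/lib/minikube/binaries/v1.28.0/kubeadm config validate \
	      --config /var/tmp/minikube/kubeadm.yaml.new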
	
	I1124 14:16:03.951416  184373 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1124 14:16:03.959681  184373 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 14:16:03.959784  184373 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 14:16:03.967658  184373 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1124 14:16:03.980691  184373 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 14:16:03.993835  184373 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I1124 14:16:04.008872  184373 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1124 14:16:04.012956  184373 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 14:16:04.024189  184373 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 14:16:04.140732  184373 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 14:16:04.162590  184373 certs.go:69] Setting up /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/old-k8s-version-706771 for IP: 192.168.76.2
	I1124 14:16:04.162659  184373 certs.go:195] generating shared ca certs ...
	I1124 14:16:04.162691  184373 certs.go:227] acquiring lock for ca certs: {Name:mk5b88bcf3bee8e73291a2c9c79f99bafa2afa7b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:16:04.162876  184373 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21932-2805/.minikube/ca.key
	I1124 14:16:04.162962  184373 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21932-2805/.minikube/proxy-client-ca.key
	I1124 14:16:04.162986  184373 certs.go:257] generating profile certs ...
	I1124 14:16:04.163091  184373 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/old-k8s-version-706771/client.key
	I1124 14:16:04.163201  184373 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/old-k8s-version-706771/apiserver.key.e094559e
	I1124 14:16:04.163286  184373 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/old-k8s-version-706771/proxy-client.key
	I1124 14:16:04.163491  184373 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2805/.minikube/certs/4611.pem (1338 bytes)
	W1124 14:16:04.163563  184373 certs.go:480] ignoring /home/jenkins/minikube-integration/21932-2805/.minikube/certs/4611_empty.pem, impossibly tiny 0 bytes
	I1124 14:16:04.163591  184373 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca-key.pem (1679 bytes)
	I1124 14:16:04.163644  184373 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca.pem (1078 bytes)
	I1124 14:16:04.163696  184373 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2805/.minikube/certs/cert.pem (1123 bytes)
	I1124 14:16:04.163755  184373 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2805/.minikube/certs/key.pem (1675 bytes)
	I1124 14:16:04.163832  184373 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2805/.minikube/files/etc/ssl/certs/46112.pem (1708 bytes)
	I1124 14:16:04.164482  184373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 14:16:04.188065  184373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1124 14:16:04.209372  184373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 14:16:04.229872  184373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1124 14:16:04.253188  184373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/old-k8s-version-706771/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1124 14:16:04.275435  184373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/old-k8s-version-706771/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1124 14:16:04.302969  184373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/old-k8s-version-706771/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 14:16:04.325736  184373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/old-k8s-version-706771/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1124 14:16:04.349158  184373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 14:16:04.381600  184373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/certs/4611.pem --> /usr/share/ca-certificates/4611.pem (1338 bytes)
	I1124 14:16:04.404820  184373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/files/etc/ssl/certs/46112.pem --> /usr/share/ca-certificates/46112.pem (1708 bytes)
	I1124 14:16:04.425022  184373 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 14:16:04.439088  184373 ssh_runner.go:195] Run: openssl version
	I1124 14:16:04.445727  184373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 14:16:04.454736  184373 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 14:16:04.458448  184373 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 13:14 /usr/share/ca-certificates/minikubeCA.pem
	I1124 14:16:04.458556  184373 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 14:16:04.502058  184373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 14:16:04.510254  184373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4611.pem && ln -fs /usr/share/ca-certificates/4611.pem /etc/ssl/certs/4611.pem"
	I1124 14:16:04.518990  184373 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4611.pem
	I1124 14:16:04.522699  184373 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 13:21 /usr/share/ca-certificates/4611.pem
	I1124 14:16:04.522764  184373 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4611.pem
	I1124 14:16:04.565062  184373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4611.pem /etc/ssl/certs/51391683.0"
	I1124 14:16:04.573126  184373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/46112.pem && ln -fs /usr/share/ca-certificates/46112.pem /etc/ssl/certs/46112.pem"
	I1124 14:16:04.581080  184373 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/46112.pem
	I1124 14:16:04.584790  184373 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 13:21 /usr/share/ca-certificates/46112.pem
	I1124 14:16:04.584882  184373 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/46112.pem
	I1124 14:16:04.625607  184373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/46112.pem /etc/ssl/certs/3ec20f2e.0"
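	Each openssl x509 -hash / ln -fs pair above follows OpenSSL's subject-hash lookup convention: TLS clients find a CA under /etc/ssl/certs by <subject-hash>.0, and b5213941.0, 51391683.0 and 3ec20f2e.0 are exactly those hashes. The generic pattern, with foo.pem as a placeholder certificate:
	
	    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/foo.pem)
	    sudo ln -fs /usr/share/ca-certificates/foo.pem "/etc/ssl/certs/${h}.0"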
	I1124 14:16:04.633777  184373 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 14:16:04.638000  184373 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1124 14:16:04.686769  184373 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1124 14:16:04.728480  184373 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1124 14:16:04.769601  184373 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1124 14:16:04.821532  184373 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1124 14:16:04.875880  184373 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
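	The -checkend 86400 flag makes openssl exit non-zero if the certificate expires within the next 86400 seconds (24h); that exit status is how the restart path decides the existing control-plane certs can be reused. Standalone (a sketch):
	
	    if openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400; then
	      echo "cert valid for at least another day"
	    else
	      echo "cert missing or expiring within 24h"
	    fi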
	I1124 14:16:04.933874  184373 kubeadm.go:401] StartCluster: {Name:old-k8s-version-706771 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-706771 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 14:16:04.934050  184373 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 14:16:04.934144  184373 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 14:16:05.003699  184373 cri.go:89] found id: "d2ae3d6088d6c2a4d1067c258f51a7ecb4899a0bf6d5568e6df682d6446ca5c7"
	I1124 14:16:05.003791  184373 cri.go:89] found id: "39e49abd246636c62439ed76a4481780faa5610ca54df7818f5d5ec1656a3fc6"
	I1124 14:16:05.003816  184373 cri.go:89] found id: "065e2235590533b061f78ed69d188fc0f922c3d8f7c9e6624ca6b074b6ff8055"
	I1124 14:16:05.003839  184373 cri.go:89] found id: ""
	I1124 14:16:05.003918  184373 ssh_runner.go:195] Run: sudo runc list -f json
	W1124 14:16:05.040584  184373 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T14:16:05Z" level=error msg="open /run/runc: no such file or directory"
	I1124 14:16:05.040737  184373 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 14:16:05.052073  184373 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1124 14:16:05.052142  184373 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1124 14:16:05.052214  184373 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1124 14:16:05.060076  184373 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1124 14:16:05.060690  184373 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-706771" does not appear in /home/jenkins/minikube-integration/21932-2805/kubeconfig
	I1124 14:16:05.060977  184373 kubeconfig.go:62] /home/jenkins/minikube-integration/21932-2805/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-706771" cluster setting kubeconfig missing "old-k8s-version-706771" context setting]
	I1124 14:16:05.062159  184373 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2805/kubeconfig: {Name:mk95d10d27091d631e85a5a3c35d5e4e38630871 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:16:05.069611  184373 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1124 14:16:05.078437  184373 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1124 14:16:05.078512  184373 kubeadm.go:602] duration metric: took 26.350571ms to restartPrimaryControlPlane
	I1124 14:16:05.078537  184373 kubeadm.go:403] duration metric: took 144.671671ms to StartCluster
	I1124 14:16:05.078580  184373 settings.go:142] acquiring lock: {Name:mk89c1ba43c874315f683e1eb3a8f5ff3817a931 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:16:05.078662  184373 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21932-2805/kubeconfig
	I1124 14:16:05.079712  184373 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2805/kubeconfig: {Name:mk95d10d27091d631e85a5a3c35d5e4e38630871 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:16:05.079990  184373 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 14:16:05.080445  184373 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 14:16:05.080707  184373 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-706771"
	I1124 14:16:05.080773  184373 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-706771"
	W1124 14:16:05.080796  184373 addons.go:248] addon storage-provisioner should already be in state true
	I1124 14:16:05.080820  184373 config.go:182] Loaded profile config "old-k8s-version-706771": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1124 14:16:05.080867  184373 host.go:66] Checking if "old-k8s-version-706771" exists ...
	I1124 14:16:05.080934  184373 addons.go:70] Setting dashboard=true in profile "old-k8s-version-706771"
	I1124 14:16:05.080952  184373 addons.go:239] Setting addon dashboard=true in "old-k8s-version-706771"
	W1124 14:16:05.080971  184373 addons.go:248] addon dashboard should already be in state true
	I1124 14:16:05.081025  184373 host.go:66] Checking if "old-k8s-version-706771" exists ...
	I1124 14:16:05.081494  184373 cli_runner.go:164] Run: docker container inspect old-k8s-version-706771 --format={{.State.Status}}
	I1124 14:16:05.081530  184373 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-706771"
	I1124 14:16:05.081566  184373 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-706771"
	I1124 14:16:05.081851  184373 cli_runner.go:164] Run: docker container inspect old-k8s-version-706771 --format={{.State.Status}}
	I1124 14:16:05.081495  184373 cli_runner.go:164] Run: docker container inspect old-k8s-version-706771 --format={{.State.Status}}
	I1124 14:16:05.089511  184373 out.go:179] * Verifying Kubernetes components...
	I1124 14:16:05.095695  184373 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 14:16:05.138498  184373 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-706771"
	W1124 14:16:05.138520  184373 addons.go:248] addon default-storageclass should already be in state true
	I1124 14:16:05.138543  184373 host.go:66] Checking if "old-k8s-version-706771" exists ...
	I1124 14:16:05.139006  184373 cli_runner.go:164] Run: docker container inspect old-k8s-version-706771 --format={{.State.Status}}
	I1124 14:16:05.147533  184373 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1124 14:16:05.154338  184373 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1124 14:16:05.157354  184373 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 14:16:05.157479  184373 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1124 14:16:05.157492  184373 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1124 14:16:05.157559  184373 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-706771
	I1124 14:16:05.160238  184373 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 14:16:05.160263  184373 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 14:16:05.160323  184373 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-706771
	I1124 14:16:05.183919  184373 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 14:16:05.183950  184373 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 14:16:05.184017  184373 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-706771
	I1124 14:16:05.211525  184373 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/old-k8s-version-706771/id_rsa Username:docker}
	I1124 14:16:05.223623  184373 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/old-k8s-version-706771/id_rsa Username:docker}
	I1124 14:16:05.239584  184373 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/old-k8s-version-706771/id_rsa Username:docker}
	I1124 14:16:05.473769  184373 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 14:16:05.491749  184373 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1124 14:16:05.491776  184373 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1124 14:16:05.501182  184373 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 14:16:05.536883  184373 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1124 14:16:05.536913  184373 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1124 14:16:05.544218  184373 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 14:16:05.544823  184373 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-706771" to be "Ready" ...
	I1124 14:16:05.618864  184373 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1124 14:16:05.618927  184373 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1124 14:16:05.678580  184373 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1124 14:16:05.678610  184373 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1124 14:16:05.767555  184373 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1124 14:16:05.767580  184373 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1124 14:16:05.834406  184373 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1124 14:16:05.834430  184373 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1124 14:16:05.862596  184373 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1124 14:16:05.862670  184373 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1124 14:16:05.889360  184373 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1124 14:16:05.889439  184373 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1124 14:16:05.913648  184373 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1124 14:16:05.913721  184373 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1124 14:16:05.937503  184373 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1124 14:16:10.270941  184373 node_ready.go:49] node "old-k8s-version-706771" is "Ready"
	I1124 14:16:10.270967  184373 node_ready.go:38] duration metric: took 4.726105116s for node "old-k8s-version-706771" to be "Ready" ...
	I1124 14:16:10.270983  184373 api_server.go:52] waiting for apiserver process to appear ...
	I1124 14:16:10.271040  184373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 14:16:12.018647  184373 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.517426308s)
	I1124 14:16:12.018735  184373 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.4744906s)
	I1124 14:16:12.565265  184373 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (6.627668405s)
	I1124 14:16:12.565478  184373 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.294426112s)
	I1124 14:16:12.565501  184373 api_server.go:72] duration metric: took 7.485461329s to wait for apiserver process to appear ...
	I1124 14:16:12.565507  184373 api_server.go:88] waiting for apiserver healthz status ...
	I1124 14:16:12.565523  184373 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 14:16:12.568459  184373 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-706771 addons enable metrics-server
	
	I1124 14:16:12.571291  184373 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1124 14:16:12.574199  184373 addons.go:530] duration metric: took 7.49376753s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1124 14:16:12.575089  184373 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1124 14:16:12.576547  184373 api_server.go:141] control plane version: v1.28.0
	I1124 14:16:12.576572  184373 api_server.go:131] duration metric: took 11.05897ms to wait for apiserver health ...
	I1124 14:16:12.576581  184373 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 14:16:12.580289  184373 system_pods.go:59] 8 kube-system pods found
	I1124 14:16:12.580325  184373 system_pods.go:61] "coredns-5dd5756b68-znmnc" [b0bcf3de-2ab4-48ba-b370-da2bf423cfdd] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 14:16:12.580334  184373 system_pods.go:61] "etcd-old-k8s-version-706771" [7e887a47-501e-44b0-951b-4a6810aabf89] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 14:16:12.580340  184373 system_pods.go:61] "kindnet-95mv4" [fb45bd9c-2a20-424a-83a6-fa60017b50ac] Running
	I1124 14:16:12.580348  184373 system_pods.go:61] "kube-apiserver-old-k8s-version-706771" [a0e63866-ac6c-4a7f-9731-73fbbd6e45b6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1124 14:16:12.580360  184373 system_pods.go:61] "kube-controller-manager-old-k8s-version-706771" [0c324973-9b9a-4122-b54a-383ef4eeb449] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 14:16:12.580374  184373 system_pods.go:61] "kube-proxy-b7d5h" [8ece550d-eacc-4c08-8445-2cf7769e2988] Running
	I1124 14:16:12.580381  184373 system_pods.go:61] "kube-scheduler-old-k8s-version-706771" [c7923b32-5218-4bd1-ac2e-cc167172ca87] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 14:16:12.580389  184373 system_pods.go:61] "storage-provisioner" [3ca62966-d0c0-4e3f-8e48-8afb3c015191] Running
	I1124 14:16:12.580395  184373 system_pods.go:74] duration metric: took 3.808215ms to wait for pod list to return data ...
	I1124 14:16:12.580402  184373 default_sa.go:34] waiting for default service account to be created ...
	I1124 14:16:12.582790  184373 default_sa.go:45] found service account: "default"
	I1124 14:16:12.582816  184373 default_sa.go:55] duration metric: took 2.406813ms for default service account to be created ...
	I1124 14:16:12.582826  184373 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 14:16:12.586504  184373 system_pods.go:86] 8 kube-system pods found
	I1124 14:16:12.586536  184373 system_pods.go:89] "coredns-5dd5756b68-znmnc" [b0bcf3de-2ab4-48ba-b370-da2bf423cfdd] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 14:16:12.586545  184373 system_pods.go:89] "etcd-old-k8s-version-706771" [7e887a47-501e-44b0-951b-4a6810aabf89] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 14:16:12.586551  184373 system_pods.go:89] "kindnet-95mv4" [fb45bd9c-2a20-424a-83a6-fa60017b50ac] Running
	I1124 14:16:12.586560  184373 system_pods.go:89] "kube-apiserver-old-k8s-version-706771" [a0e63866-ac6c-4a7f-9731-73fbbd6e45b6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1124 14:16:12.586566  184373 system_pods.go:89] "kube-controller-manager-old-k8s-version-706771" [0c324973-9b9a-4122-b54a-383ef4eeb449] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 14:16:12.586573  184373 system_pods.go:89] "kube-proxy-b7d5h" [8ece550d-eacc-4c08-8445-2cf7769e2988] Running
	I1124 14:16:12.586584  184373 system_pods.go:89] "kube-scheduler-old-k8s-version-706771" [c7923b32-5218-4bd1-ac2e-cc167172ca87] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 14:16:12.586591  184373 system_pods.go:89] "storage-provisioner" [3ca62966-d0c0-4e3f-8e48-8afb3c015191] Running
	I1124 14:16:12.586598  184373 system_pods.go:126] duration metric: took 3.76691ms to wait for k8s-apps to be running ...
	I1124 14:16:12.586610  184373 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 14:16:12.586671  184373 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 14:16:12.599965  184373 system_svc.go:56] duration metric: took 13.345461ms WaitForService to wait for kubelet
	I1124 14:16:12.600040  184373 kubeadm.go:587] duration metric: took 7.519998181s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 14:16:12.600072  184373 node_conditions.go:102] verifying NodePressure condition ...
	I1124 14:16:12.602995  184373 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1124 14:16:12.603072  184373 node_conditions.go:123] node cpu capacity is 2
	I1124 14:16:12.603102  184373 node_conditions.go:105] duration metric: took 3.006318ms to run NodePressure ...
	I1124 14:16:12.603129  184373 start.go:242] waiting for startup goroutines ...
	I1124 14:16:12.603168  184373 start.go:247] waiting for cluster config update ...
	I1124 14:16:12.603194  184373 start.go:256] writing updated cluster config ...
	I1124 14:16:12.603589  184373 ssh_runner.go:195] Run: rm -f paused
	I1124 14:16:12.607150  184373 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 14:16:12.611845  184373 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-znmnc" in "kube-system" namespace to be "Ready" or be gone ...
	W1124 14:16:14.617399  184373 pod_ready.go:104] pod "coredns-5dd5756b68-znmnc" is not "Ready", error: <nil>
	W1124 14:16:16.617855  184373 pod_ready.go:104] pod "coredns-5dd5756b68-znmnc" is not "Ready", error: <nil>
	W1124 14:16:19.117156  184373 pod_ready.go:104] pod "coredns-5dd5756b68-znmnc" is not "Ready", error: <nil>
	W1124 14:16:21.117805  184373 pod_ready.go:104] pod "coredns-5dd5756b68-znmnc" is not "Ready", error: <nil>
	W1124 14:16:23.118188  184373 pod_ready.go:104] pod "coredns-5dd5756b68-znmnc" is not "Ready", error: <nil>
	W1124 14:16:25.118258  184373 pod_ready.go:104] pod "coredns-5dd5756b68-znmnc" is not "Ready", error: <nil>
	W1124 14:16:27.118566  184373 pod_ready.go:104] pod "coredns-5dd5756b68-znmnc" is not "Ready", error: <nil>
	W1124 14:16:29.124874  184373 pod_ready.go:104] pod "coredns-5dd5756b68-znmnc" is not "Ready", error: <nil>
	W1124 14:16:31.617967  184373 pod_ready.go:104] pod "coredns-5dd5756b68-znmnc" is not "Ready", error: <nil>
	W1124 14:16:34.118012  184373 pod_ready.go:104] pod "coredns-5dd5756b68-znmnc" is not "Ready", error: <nil>
	W1124 14:16:36.118115  184373 pod_ready.go:104] pod "coredns-5dd5756b68-znmnc" is not "Ready", error: <nil>
	W1124 14:16:38.617820  184373 pod_ready.go:104] pod "coredns-5dd5756b68-znmnc" is not "Ready", error: <nil>
	W1124 14:16:41.117974  184373 pod_ready.go:104] pod "coredns-5dd5756b68-znmnc" is not "Ready", error: <nil>
	I1124 14:16:43.117488  184373 pod_ready.go:94] pod "coredns-5dd5756b68-znmnc" is "Ready"
	I1124 14:16:43.117517  184373 pod_ready.go:86] duration metric: took 30.505641767s for pod "coredns-5dd5756b68-znmnc" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:16:43.120570  184373 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-706771" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:16:43.125856  184373 pod_ready.go:94] pod "etcd-old-k8s-version-706771" is "Ready"
	I1124 14:16:43.125884  184373 pod_ready.go:86] duration metric: took 5.286311ms for pod "etcd-old-k8s-version-706771" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:16:43.128977  184373 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-706771" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:16:43.134168  184373 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-706771" is "Ready"
	I1124 14:16:43.134198  184373 pod_ready.go:86] duration metric: took 5.19314ms for pod "kube-apiserver-old-k8s-version-706771" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:16:43.137408  184373 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-706771" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:16:43.315231  184373 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-706771" is "Ready"
	I1124 14:16:43.315259  184373 pod_ready.go:86] duration metric: took 177.826318ms for pod "kube-controller-manager-old-k8s-version-706771" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:16:43.516257  184373 pod_ready.go:83] waiting for pod "kube-proxy-b7d5h" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:16:43.914833  184373 pod_ready.go:94] pod "kube-proxy-b7d5h" is "Ready"
	I1124 14:16:43.914861  184373 pod_ready.go:86] duration metric: took 398.572722ms for pod "kube-proxy-b7d5h" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:16:44.115975  184373 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-706771" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:16:44.515263  184373 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-706771" is "Ready"
	I1124 14:16:44.515302  184373 pod_ready.go:86] duration metric: took 399.29771ms for pod "kube-scheduler-old-k8s-version-706771" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:16:44.515315  184373 pod_ready.go:40] duration metric: took 31.90809121s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 14:16:44.587669  184373 start.go:625] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1124 14:16:44.589365  184373 out.go:203] 
	W1124 14:16:44.590559  184373 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1124 14:16:44.591786  184373 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1124 14:16:44.592802  184373 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-706771" cluster and "default" namespace by default
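	
	The startup log above applies the storage-provisioner, default-storageclass, and dashboard manifests over SSH, waits for the label-selected kube-system pods to become "Ready", and then flags a kubectl 1.33 client against a 1.28 server. A minimal way to repeat both checks by hand, assuming the profile name and binary path from this run:
	
		kubectl get pods -n kube-system -l k8s-app=kube-dns
		out/minikube-linux-arm64 -p old-k8s-version-706771 kubectl -- get pods -A
	
	The second form uses minikube's bundled kubectl, which avoids the version-skew warning shown above.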
	
	
	==> CRI-O <==
	Nov 24 14:16:44 old-k8s-version-706771 crio[656]: time="2025-11-24T14:16:44.042752299Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=b3ceea7e-8275-4f3d-b4d8-742d0f73493a name=/runtime.v1.ImageService/ImageStatus
	Nov 24 14:16:44 old-k8s-version-706771 crio[656]: time="2025-11-24T14:16:44.044003981Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=5aab024a-be03-4ebf-bb60-5c0f6ee65f15 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 14:16:44 old-k8s-version-706771 crio[656]: time="2025-11-24T14:16:44.045189528Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-lbj84/dashboard-metrics-scraper" id=9b082b1e-c152-47c6-bc75-944971c3002b name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 14:16:44 old-k8s-version-706771 crio[656]: time="2025-11-24T14:16:44.045464756Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 14:16:44 old-k8s-version-706771 crio[656]: time="2025-11-24T14:16:44.053095002Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 14:16:44 old-k8s-version-706771 crio[656]: time="2025-11-24T14:16:44.054026893Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 14:16:44 old-k8s-version-706771 crio[656]: time="2025-11-24T14:16:44.069340892Z" level=info msg="Created container 943b2469fd266d2e12ec380d77c35b02f3076920d7d98bb2179ebb87698073e6: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-lbj84/dashboard-metrics-scraper" id=9b082b1e-c152-47c6-bc75-944971c3002b name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 14:16:44 old-k8s-version-706771 crio[656]: time="2025-11-24T14:16:44.07203702Z" level=info msg="Starting container: 943b2469fd266d2e12ec380d77c35b02f3076920d7d98bb2179ebb87698073e6" id=63c61d93-baee-4add-9da2-f55eb3c6b4f9 name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 14:16:44 old-k8s-version-706771 crio[656]: time="2025-11-24T14:16:44.07425581Z" level=info msg="Started container" PID=1641 containerID=943b2469fd266d2e12ec380d77c35b02f3076920d7d98bb2179ebb87698073e6 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-lbj84/dashboard-metrics-scraper id=63c61d93-baee-4add-9da2-f55eb3c6b4f9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=6e6155867cf0eea6395d290dc3654b576c763ea67d4313e21a26f52430a9b11c
	Nov 24 14:16:44 old-k8s-version-706771 conmon[1639]: conmon 943b2469fd266d2e12ec <ninfo>: container 1641 exited with status 1
	Nov 24 14:16:44 old-k8s-version-706771 crio[656]: time="2025-11-24T14:16:44.560348259Z" level=info msg="Removing container: aae3155d6c605f8339e180eb85589bc068e664c896a0197580d0458ae3f32759" id=c40b91fc-20b1-4982-817a-cee15956ec7f name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 24 14:16:44 old-k8s-version-706771 crio[656]: time="2025-11-24T14:16:44.572412195Z" level=info msg="Error loading conmon cgroup of container aae3155d6c605f8339e180eb85589bc068e664c896a0197580d0458ae3f32759: cgroup deleted" id=c40b91fc-20b1-4982-817a-cee15956ec7f name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 24 14:16:44 old-k8s-version-706771 crio[656]: time="2025-11-24T14:16:44.575929617Z" level=info msg="Removed container aae3155d6c605f8339e180eb85589bc068e664c896a0197580d0458ae3f32759: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-lbj84/dashboard-metrics-scraper" id=c40b91fc-20b1-4982-817a-cee15956ec7f name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 24 14:16:51 old-k8s-version-706771 crio[656]: time="2025-11-24T14:16:51.271802882Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 24 14:16:51 old-k8s-version-706771 crio[656]: time="2025-11-24T14:16:51.280343275Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 24 14:16:51 old-k8s-version-706771 crio[656]: time="2025-11-24T14:16:51.280381175Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 24 14:16:51 old-k8s-version-706771 crio[656]: time="2025-11-24T14:16:51.280426845Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 24 14:16:51 old-k8s-version-706771 crio[656]: time="2025-11-24T14:16:51.286574261Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 24 14:16:51 old-k8s-version-706771 crio[656]: time="2025-11-24T14:16:51.286611759Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 24 14:16:51 old-k8s-version-706771 crio[656]: time="2025-11-24T14:16:51.286636169Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 24 14:16:51 old-k8s-version-706771 crio[656]: time="2025-11-24T14:16:51.291126672Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 24 14:16:51 old-k8s-version-706771 crio[656]: time="2025-11-24T14:16:51.291299015Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 24 14:16:51 old-k8s-version-706771 crio[656]: time="2025-11-24T14:16:51.291461256Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 24 14:16:51 old-k8s-version-706771 crio[656]: time="2025-11-24T14:16:51.295283936Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 24 14:16:51 old-k8s-version-706771 crio[656]: time="2025-11-24T14:16:51.295314452Z" level=info msg="Updated default CNI network name to kindnet"
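	
	These CRI-O entries cover one full lifecycle of the dashboard-metrics-scraper container (create, start, exit status 1, removal of the previous attempt) plus CNI config reload events for kindnet. A sketch of how to pull the same view from the node, assuming crictl is present in the minikube node image; the truncated container ID is taken from the log:
	
		out/minikube-linux-arm64 -p old-k8s-version-706771 ssh -- sudo crictl ps -a
		out/minikube-linux-arm64 -p old-k8s-version-706771 ssh -- sudo crictl logs 943b2469fd266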
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	943b2469fd266       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           15 seconds ago      Exited              dashboard-metrics-scraper   2                   6e6155867cf0e       dashboard-metrics-scraper-5f989dc9cf-lbj84       kubernetes-dashboard
	fd07626170d21       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           17 seconds ago      Running             storage-provisioner         2                   20cd065304099       storage-provisioner                              kube-system
	7cacf35d21de9       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   30 seconds ago      Running             kubernetes-dashboard        0                   9e90b372dadb1       kubernetes-dashboard-8694d4445c-54hbr            kubernetes-dashboard
	52e700d0e4231       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           48 seconds ago      Running             busybox                     1                   4fff7d3fbf179       busybox                                          default
	9169afa245f15       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                           48 seconds ago      Running             coredns                     1                   9525f05113187       coredns-5dd5756b68-znmnc                         kube-system
	662792c1d19f8       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           48 seconds ago      Running             kindnet-cni                 1                   edcbf6e42dd29       kindnet-95mv4                                    kube-system
	6bfac4080fd66       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                           48 seconds ago      Running             kube-proxy                  1                   b4eae84d07149       kube-proxy-b7d5h                                 kube-system
	3f6d39d6f582e       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           48 seconds ago      Exited              storage-provisioner         1                   20cd065304099       storage-provisioner                              kube-system
	4462433cacd7f       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                           54 seconds ago      Running             kube-scheduler              1                   bae09cfa68755       kube-scheduler-old-k8s-version-706771            kube-system
	d2ae3d6088d6c       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                           54 seconds ago      Running             etcd                        1                   93911d0778261       etcd-old-k8s-version-706771                      kube-system
	39e49abd24663       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                           54 seconds ago      Running             kube-apiserver              1                   1208c148bac04       kube-apiserver-old-k8s-version-706771            kube-system
	065e223559053       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                           54 seconds ago      Running             kube-controller-manager     1                   51ce8e4b4aa1e       kube-controller-manager-old-k8s-version-706771   kube-system
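	
	The table shows dashboard-metrics-scraper in the Exited state on attempt 2 while every other container runs, i.e. the scraper is crash-looping. One way to see the restart reason and last termination state, using the pod name from the table:
	
		kubectl -n kubernetes-dashboard describe pod dashboard-metrics-scraper-5f989dc9cf-lbj84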
	
	
	==> coredns [9169afa245f154ace8b11f56579ef064687ab842b2b449fc8e57beaf11efb2fc] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:49040 - 58573 "HINFO IN 7364402776033981871.1963845384173948205. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010621788s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
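	
	CoreDNS starts serving with an unsynced Kubernetes API watch and logs one dial timeout to the kubernetes Service VIP (10.96.0.1:443), consistent with the apiserver still restarting at that point. Two quick follow-up checks; the busybox image tag below is an arbitrary choice, not taken from this run:
	
		kubectl -n kube-system logs -l k8s-app=kube-dns --tail=20
		kubectl run dnsprobe --rm -it --restart=Never --image=busybox:1.36 -- nslookup kubernetes.default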
	
	
	==> describe nodes <==
	Name:               old-k8s-version-706771
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-706771
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b5d1c9f4e75f4e638a533695fd62619949cefcab
	                    minikube.k8s.io/name=old-k8s-version-706771
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T14_15_03_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 14:14:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-706771
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 14:16:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 14:16:41 +0000   Mon, 24 Nov 2025 14:14:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 14:16:41 +0000   Mon, 24 Nov 2025 14:14:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 14:16:41 +0000   Mon, 24 Nov 2025 14:14:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Nov 2025 14:16:41 +0000   Mon, 24 Nov 2025 14:15:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-706771
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 7283ea1857f18f20a875c29069214c9d
	  System UUID:                14492c3a-8806-4276-8078-fdf3e23d5fc8
	  Boot ID:                    1b5f797b-5607-4a65-8de2-379783b7e272
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 coredns-5dd5756b68-znmnc                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     105s
	  kube-system                 etcd-old-k8s-version-706771                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         117s
	  kube-system                 kindnet-95mv4                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      105s
	  kube-system                 kube-apiserver-old-k8s-version-706771             250m (12%)    0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-controller-manager-old-k8s-version-706771    200m (10%)    0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-proxy-b7d5h                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 kube-scheduler-old-k8s-version-706771             100m (5%)     0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         103s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-lbj84        0 (0%)        0 (0%)      0 (0%)           0 (0%)         36s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-54hbr             0 (0%)        0 (0%)      0 (0%)           0 (0%)         36s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 104s                 kube-proxy       
	  Normal  Starting                 47s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  2m5s (x8 over 2m5s)  kubelet          Node old-k8s-version-706771 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m5s (x8 over 2m5s)  kubelet          Node old-k8s-version-706771 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m5s (x8 over 2m5s)  kubelet          Node old-k8s-version-706771 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     117s                 kubelet          Node old-k8s-version-706771 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  117s                 kubelet          Node old-k8s-version-706771 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    117s                 kubelet          Node old-k8s-version-706771 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 117s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           106s                 node-controller  Node old-k8s-version-706771 event: Registered Node old-k8s-version-706771 in Controller
	  Normal  NodeReady                91s                  kubelet          Node old-k8s-version-706771 status is now: NodeReady
	  Normal  Starting                 55s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  55s (x8 over 55s)    kubelet          Node old-k8s-version-706771 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    55s (x8 over 55s)    kubelet          Node old-k8s-version-706771 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     55s (x8 over 55s)    kubelet          Node old-k8s-version-706771 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           36s                  node-controller  Node old-k8s-version-706771 event: Registered Node old-k8s-version-706771 in Controller
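	
	This block is standard `kubectl describe node` output; the repeated NodeHasSufficient*/Starting event groups reflect the kubelet restarts during this test rather than any fault. To regenerate it, or to pull just the allocatable resources it summarizes:
	
		kubectl describe node old-k8s-version-706771
		kubectl get node old-k8s-version-706771 -o jsonpath='{.status.allocatable}'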
	
	
	==> dmesg <==
	[Nov24 13:46] overlayfs: idmapped layers are currently not supported
	[Nov24 13:52] overlayfs: idmapped layers are currently not supported
	[ +31.432146] overlayfs: idmapped layers are currently not supported
	[Nov24 13:53] overlayfs: idmapped layers are currently not supported
	[Nov24 13:54] overlayfs: idmapped layers are currently not supported
	[Nov24 13:56] overlayfs: idmapped layers are currently not supported
	[Nov24 13:57] overlayfs: idmapped layers are currently not supported
	[Nov24 13:58] overlayfs: idmapped layers are currently not supported
	[  +2.963383] overlayfs: idmapped layers are currently not supported
	[ +47.364934] overlayfs: idmapped layers are currently not supported
	[Nov24 13:59] overlayfs: idmapped layers are currently not supported
	[Nov24 14:00] overlayfs: idmapped layers are currently not supported
	[ +26.972375] overlayfs: idmapped layers are currently not supported
	[Nov24 14:02] overlayfs: idmapped layers are currently not supported
	[Nov24 14:03] overlayfs: idmapped layers are currently not supported
	[Nov24 14:05] overlayfs: idmapped layers are currently not supported
	[Nov24 14:07] overlayfs: idmapped layers are currently not supported
	[ +22.741489] overlayfs: idmapped layers are currently not supported
	[Nov24 14:11] overlayfs: idmapped layers are currently not supported
	[Nov24 14:13] overlayfs: idmapped layers are currently not supported
	[ +29.661409] overlayfs: idmapped layers are currently not supported
	[ +14.398898] overlayfs: idmapped layers are currently not supported
	[Nov24 14:14] overlayfs: idmapped layers are currently not supported
	[ +36.148198] overlayfs: idmapped layers are currently not supported
	[Nov24 14:16] overlayfs: idmapped layers are currently not supported
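	
	The repeated "overlayfs: idmapped layers are currently not supported" lines are typically emitted once per container mount that requests ID-mapped layers on a kernel without the feature (5.15 here), and appear benign for these tests. To capture the same ring buffer with readable timestamps, assuming util-linux dmesg on the host (the node shares the host kernel):
	
		sudo dmesg --ctime | grep overlayfs | tail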
	
	
	==> etcd [d2ae3d6088d6c2a4d1067c258f51a7ecb4899a0bf6d5568e6df682d6446ca5c7] <==
	{"level":"info","ts":"2025-11-24T14:16:05.435519Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-24T14:16:05.435661Z","caller":"etcdserver/server.go:754","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2025-11-24T14:16:05.439312Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-24T14:16:05.439891Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-24T14:16:05.439969Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-24T14:16:05.443289Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-11-24T14:16:05.443474Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-11-24T14:16:05.44412Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2025-11-24T14:16:05.444342Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2025-11-24T14:16:05.448157Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-24T14:16:05.448211Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-24T14:16:07.067409Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2025-11-24T14:16:07.067518Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-11-24T14:16:07.067572Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-11-24T14:16:07.067616Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2025-11-24T14:16:07.067656Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-11-24T14:16:07.067696Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2025-11-24T14:16:07.067726Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-11-24T14:16:07.071565Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:old-k8s-version-706771 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-24T14:16:07.071778Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-24T14:16:07.072793Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-11-24T14:16:07.072909Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-24T14:16:07.077066Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-11-24T14:16:07.084726Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-24T14:16:07.084831Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 14:16:59 up  1:59,  0 user,  load average: 1.72, 2.51, 2.27
	Linux old-k8s-version-706771 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [662792c1d19f88cbc7c05bc4aa78cf420616277ed272fd2a62c1fd7eea280ac9] <==
	I1124 14:16:11.039698       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1124 14:16:11.040093       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1124 14:16:11.112943       1 main.go:148] setting mtu 1500 for CNI 
	I1124 14:16:11.113083       1 main.go:178] kindnetd IP family: "ipv4"
	I1124 14:16:11.113127       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-24T14:16:11Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1124 14:16:11.313982       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1124 14:16:11.314011       1 controller.go:381] "Waiting for informer caches to sync"
	I1124 14:16:11.314021       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1124 14:16:11.314416       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1124 14:16:41.265594       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1124 14:16:41.314288       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1124 14:16:41.314306       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1124 14:16:41.315336       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1124 14:16:42.814109       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1124 14:16:42.814144       1 metrics.go:72] Registering metrics
	I1124 14:16:42.814215       1 controller.go:711] "Syncing nftables rules"
	I1124 14:16:51.270811       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1124 14:16:51.270858       1 main.go:301] handling current node
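	
	kindnet hits the same 10.96.0.1:443 dial timeouts as CoreDNS while the apiserver is still coming up, then syncs its informer caches and starts handling the single node. To confirm the Service VIP is backed by the restarted apiserver and re-check the daemon's logs (assuming the standard app=kindnet label):
	
		kubectl get endpoints kubernetes
		kubectl -n kube-system logs -l app=kindnet --tail=10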
	
	
	==> kube-apiserver [39e49abd246636c62439ed76a4481780faa5610ca54df7818f5d5ec1656a3fc6] <==
	I1124 14:16:10.331341       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1124 14:16:10.331823       1 aggregator.go:166] initial CRD sync complete...
	I1124 14:16:10.331849       1 autoregister_controller.go:141] Starting autoregister controller
	I1124 14:16:10.331864       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1124 14:16:10.331872       1 cache.go:39] Caches are synced for autoregister controller
	I1124 14:16:10.338464       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1124 14:16:10.338661       1 shared_informer.go:318] Caches are synced for configmaps
	I1124 14:16:10.339180       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1124 14:16:10.339199       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1124 14:16:10.344088       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1124 14:16:10.370761       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1124 14:16:10.372106       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1124 14:16:10.413596       1 shared_informer.go:318] Caches are synced for node_authorizer
	E1124 14:16:10.471491       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1124 14:16:11.004350       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1124 14:16:12.388136       1 controller.go:624] quota admission added evaluator for: namespaces
	I1124 14:16:12.434478       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1124 14:16:12.460834       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1124 14:16:12.470301       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1124 14:16:12.482009       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1124 14:16:12.535262       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.106.59.133"}
	I1124 14:16:12.558388       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.109.105.60"}
	I1124 14:16:23.665552       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1124 14:16:23.781193       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1124 14:16:23.816991       1 controller.go:624] quota admission added evaluator for: endpoints
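	
	On restart the apiserver re-registers its quota admission evaluators and re-allocates the dashboard service ClusterIPs (10.106.59.133 and 10.109.105.60); the "Error removing old endpoints" line is commonly seen once on a single-node apiserver restart. The allocations can be cross-checked with:
	
		kubectl -n kubernetes-dashboard get svc -o wide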
	
	
	==> kube-controller-manager [065e2235590533b061f78ed69d188fc0f922c3d8f7c9e6624ca6b074b6ff8055] <==
	I1124 14:16:23.680720       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-8694d4445c to 1"
	I1124 14:16:23.713006       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-54hbr"
	I1124 14:16:23.713129       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-lbj84"
	I1124 14:16:23.731163       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="48.962743ms"
	I1124 14:16:23.746915       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="66.075278ms"
	I1124 14:16:23.753803       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="22.586548ms"
	I1124 14:16:23.753946       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="41.429µs"
	I1124 14:16:23.761001       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="13.940979ms"
	I1124 14:16:23.761129       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="88.903µs"
	I1124 14:16:23.771474       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="45.867µs"
	I1124 14:16:23.790798       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="61.712µs"
	I1124 14:16:23.828736       1 shared_informer.go:318] Caches are synced for garbage collector
	I1124 14:16:23.828766       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1124 14:16:23.831588       1 event.go:307] "Event occurred" object="kubernetes-dashboard" fieldPath="" kind="Endpoints" apiVersion="v1" type="Warning" reason="FailedToCreateEndpoint" message="Failed to create endpoint for service kubernetes-dashboard/kubernetes-dashboard: endpoints \"kubernetes-dashboard\" already exists"
	I1124 14:16:23.833356       1 event.go:307] "Event occurred" object="dashboard-metrics-scraper" fieldPath="" kind="Endpoints" apiVersion="v1" type="Warning" reason="FailedToCreateEndpoint" message="Failed to create endpoint for service kubernetes-dashboard/dashboard-metrics-scraper: endpoints \"dashboard-metrics-scraper\" already exists"
	I1124 14:16:23.874991       1 shared_informer.go:318] Caches are synced for garbage collector
	I1124 14:16:29.534571       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="13.925758ms"
	I1124 14:16:29.534878       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="127.763µs"
	I1124 14:16:33.537170       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="67.652µs"
	I1124 14:16:34.537270       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="63.229µs"
	I1124 14:16:35.534197       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="50.159µs"
	I1124 14:16:42.928327       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="12.743796ms"
	I1124 14:16:42.929475       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="39.238µs"
	I1124 14:16:44.570918       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="54.433µs"
	I1124 14:16:54.056564       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="78.467µs"
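	
	The controller-manager scales both dashboard ReplicaSets to 1; the FailedToCreateEndpoint warnings fire only because the Endpoints objects survived the restart and already exist, and the short repeated "Finished syncing" entries for dashboard-metrics-scraper-5f989dc9cf track the scraper crash loop seen earlier. The same story is visible via events:
	
		kubectl -n kubernetes-dashboard get events --sort-by=.lastTimestamp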
	
	
	==> kube-proxy [6bfac4080fd664cdaea1a7c4e8cdce7bf4757d53582813085720bab8e65f5a85] <==
	I1124 14:16:11.610695       1 server_others.go:69] "Using iptables proxy"
	I1124 14:16:11.673100       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I1124 14:16:11.776415       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 14:16:11.779799       1 server_others.go:152] "Using iptables Proxier"
	I1124 14:16:11.779848       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1124 14:16:11.779857       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1124 14:16:11.779881       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1124 14:16:11.780111       1 server.go:846] "Version info" version="v1.28.0"
	I1124 14:16:11.780121       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 14:16:11.787257       1 config.go:188] "Starting service config controller"
	I1124 14:16:11.787279       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1124 14:16:11.787295       1 config.go:97] "Starting endpoint slice config controller"
	I1124 14:16:11.787300       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1124 14:16:11.789087       1 config.go:315] "Starting node config controller"
	I1124 14:16:11.789111       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1124 14:16:11.891429       1 shared_informer.go:318] Caches are synced for node config
	I1124 14:16:11.891470       1 shared_informer.go:318] Caches are synced for service config
	I1124 14:16:11.891503       1 shared_informer.go:318] Caches are synced for endpoint slice config
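	
	kube-proxy comes up in iptables mode, falls back to no-op local-traffic detection for the IPv6 family (no IPv6 cluster CIDR is configured), and syncs its config caches. One way to inspect the resulting NAT rules on the node, assuming iptables is available in the node image:
	
		out/minikube-linux-arm64 -p old-k8s-version-706771 ssh -- sudo iptables -t nat -L KUBE-SERVICES -n | head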
	
	
	==> kube-scheduler [4462433cacd7fb9b40a0bc0ba0ab22736d2abbc9e6137cbb6e9ad29470cd488b] <==
	I1124 14:16:08.310885       1 serving.go:348] Generated self-signed cert in-memory
	I1124 14:16:11.341114       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1124 14:16:11.341380       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 14:16:11.359482       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I1124 14:16:11.360353       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I1124 14:16:11.360475       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 14:16:11.360524       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1124 14:16:11.360900       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1124 14:16:11.360927       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1124 14:16:11.360955       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1124 14:16:11.368161       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1124 14:16:11.462387       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	I1124 14:16:11.466641       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1124 14:16:11.469710       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
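	
	Two details above matter for debugging: the scheduler generated a self-signed certificate in memory, and it serves only on 127.0.0.1:10259, so its /healthz endpoint is reachable from the node alone and never presents a verifiable certificate. A small probe sketch under those assumptions:
	
	package main
	
	import (
		"crypto/tls"
		"fmt"
		"log"
		"net/http"
		"time"
	)
	
	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// Self-signed, in-memory serving cert: there is nothing to verify
			// against, so skip verification for this loopback-only probe.
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get("https://127.0.0.1:10259/healthz")
		if err != nil {
			log.Fatal(err)
		}
		defer resp.Body.Close()
		fmt.Println(resp.Status) // expect "200 OK" while the scheduler is healthy
	}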
	
	
	==> kubelet <==
	Nov 24 14:16:23 old-k8s-version-706771 kubelet[784]: I1124 14:16:23.734022     784 topology_manager.go:215] "Topology Admit Handler" podUID="b1e8e1bb-4464-4324-905e-cd86fdac794e" podNamespace="kubernetes-dashboard" podName="dashboard-metrics-scraper-5f989dc9cf-lbj84"
	Nov 24 14:16:23 old-k8s-version-706771 kubelet[784]: I1124 14:16:23.859767     784 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pmf8\" (UniqueName: \"kubernetes.io/projected/72a01016-3826-4d99-9b43-7c88b607e64f-kube-api-access-5pmf8\") pod \"kubernetes-dashboard-8694d4445c-54hbr\" (UID: \"72a01016-3826-4d99-9b43-7c88b607e64f\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-54hbr"
	Nov 24 14:16:23 old-k8s-version-706771 kubelet[784]: I1124 14:16:23.859827     784 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/b1e8e1bb-4464-4324-905e-cd86fdac794e-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-lbj84\" (UID: \"b1e8e1bb-4464-4324-905e-cd86fdac794e\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-lbj84"
	Nov 24 14:16:23 old-k8s-version-706771 kubelet[784]: I1124 14:16:23.859856     784 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/72a01016-3826-4d99-9b43-7c88b607e64f-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-54hbr\" (UID: \"72a01016-3826-4d99-9b43-7c88b607e64f\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-54hbr"
	Nov 24 14:16:23 old-k8s-version-706771 kubelet[784]: I1124 14:16:23.859883     784 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wsbmk\" (UniqueName: \"kubernetes.io/projected/b1e8e1bb-4464-4324-905e-cd86fdac794e-kube-api-access-wsbmk\") pod \"dashboard-metrics-scraper-5f989dc9cf-lbj84\" (UID: \"b1e8e1bb-4464-4324-905e-cd86fdac794e\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-lbj84"
	Nov 24 14:16:24 old-k8s-version-706771 kubelet[784]: W1124 14:16:24.056982     784 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/2c35ba6c594249d37f1a4deea6c290d24e73eb238087551c9a88d96856e2c4a5/crio-9e90b372dadb18b384c7a861d678fbb49e7cc4ee6d9c0587b6cad81a80a4d40e WatchSource:0}: Error finding container 9e90b372dadb18b384c7a861d678fbb49e7cc4ee6d9c0587b6cad81a80a4d40e: Status 404 returned error can't find the container with id 9e90b372dadb18b384c7a861d678fbb49e7cc4ee6d9c0587b6cad81a80a4d40e
	Nov 24 14:16:24 old-k8s-version-706771 kubelet[784]: W1124 14:16:24.076621     784 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/2c35ba6c594249d37f1a4deea6c290d24e73eb238087551c9a88d96856e2c4a5/crio-6e6155867cf0eea6395d290dc3654b576c763ea67d4313e21a26f52430a9b11c WatchSource:0}: Error finding container 6e6155867cf0eea6395d290dc3654b576c763ea67d4313e21a26f52430a9b11c: Status 404 returned error can't find the container with id 6e6155867cf0eea6395d290dc3654b576c763ea67d4313e21a26f52430a9b11c
	Nov 24 14:16:33 old-k8s-version-706771 kubelet[784]: I1124 14:16:33.513415     784 scope.go:117] "RemoveContainer" containerID="b68615eb19f6fad93918a4d6d1d139f507dac74bb2277d72ce1ed87c49ad2d78"
	Nov 24 14:16:33 old-k8s-version-706771 kubelet[784]: I1124 14:16:33.532193     784 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-54hbr" podStartSLOduration=5.833518348 podCreationTimestamp="2025-11-24 14:16:23 +0000 UTC" firstStartedPulling="2025-11-24 14:16:24.062191268 +0000 UTC m=+19.898572927" lastFinishedPulling="2025-11-24 14:16:28.760795979 +0000 UTC m=+24.597177695" observedRunningTime="2025-11-24 14:16:29.520903158 +0000 UTC m=+25.357284825" watchObservedRunningTime="2025-11-24 14:16:33.532123116 +0000 UTC m=+29.368504775"
	Nov 24 14:16:34 old-k8s-version-706771 kubelet[784]: I1124 14:16:34.518334     784 scope.go:117] "RemoveContainer" containerID="aae3155d6c605f8339e180eb85589bc068e664c896a0197580d0458ae3f32759"
	Nov 24 14:16:34 old-k8s-version-706771 kubelet[784]: E1124 14:16:34.519102     784 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-lbj84_kubernetes-dashboard(b1e8e1bb-4464-4324-905e-cd86fdac794e)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-lbj84" podUID="b1e8e1bb-4464-4324-905e-cd86fdac794e"
	Nov 24 14:16:34 old-k8s-version-706771 kubelet[784]: I1124 14:16:34.519469     784 scope.go:117] "RemoveContainer" containerID="b68615eb19f6fad93918a4d6d1d139f507dac74bb2277d72ce1ed87c49ad2d78"
	Nov 24 14:16:35 old-k8s-version-706771 kubelet[784]: I1124 14:16:35.521246     784 scope.go:117] "RemoveContainer" containerID="aae3155d6c605f8339e180eb85589bc068e664c896a0197580d0458ae3f32759"
	Nov 24 14:16:35 old-k8s-version-706771 kubelet[784]: E1124 14:16:35.521541     784 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-lbj84_kubernetes-dashboard(b1e8e1bb-4464-4324-905e-cd86fdac794e)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-lbj84" podUID="b1e8e1bb-4464-4324-905e-cd86fdac794e"
	Nov 24 14:16:41 old-k8s-version-706771 kubelet[784]: I1124 14:16:41.536665     784 scope.go:117] "RemoveContainer" containerID="3f6d39d6f582e8f8f5d54e4b32d73192d578d30ce39530503930d8ec0e325ccd"
	Nov 24 14:16:44 old-k8s-version-706771 kubelet[784]: I1124 14:16:44.042139     784 scope.go:117] "RemoveContainer" containerID="aae3155d6c605f8339e180eb85589bc068e664c896a0197580d0458ae3f32759"
	Nov 24 14:16:44 old-k8s-version-706771 kubelet[784]: I1124 14:16:44.550703     784 scope.go:117] "RemoveContainer" containerID="aae3155d6c605f8339e180eb85589bc068e664c896a0197580d0458ae3f32759"
	Nov 24 14:16:44 old-k8s-version-706771 kubelet[784]: I1124 14:16:44.551171     784 scope.go:117] "RemoveContainer" containerID="943b2469fd266d2e12ec380d77c35b02f3076920d7d98bb2179ebb87698073e6"
	Nov 24 14:16:44 old-k8s-version-706771 kubelet[784]: E1124 14:16:44.551619     784 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-lbj84_kubernetes-dashboard(b1e8e1bb-4464-4324-905e-cd86fdac794e)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-lbj84" podUID="b1e8e1bb-4464-4324-905e-cd86fdac794e"
	Nov 24 14:16:54 old-k8s-version-706771 kubelet[784]: I1124 14:16:54.042235     784 scope.go:117] "RemoveContainer" containerID="943b2469fd266d2e12ec380d77c35b02f3076920d7d98bb2179ebb87698073e6"
	Nov 24 14:16:54 old-k8s-version-706771 kubelet[784]: E1124 14:16:54.042552     784 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-lbj84_kubernetes-dashboard(b1e8e1bb-4464-4324-905e-cd86fdac794e)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-lbj84" podUID="b1e8e1bb-4464-4324-905e-cd86fdac794e"
	Nov 24 14:16:56 old-k8s-version-706771 kubelet[784]: I1124 14:16:56.926290     784 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Nov 24 14:16:56 old-k8s-version-706771 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 24 14:16:56 old-k8s-version-706771 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 24 14:16:56 old-k8s-version-706771 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
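	
	The kubelet lines above trace one crash-loop iteration of dashboard-metrics-scraper: the container exits, the kubelet removes the previous attempt ("RemoveContainer"), and each restart is gated by a growing delay, visible here as "back-off 10s" and then "back-off 20s". The delay doubles per failed restart; the 5-minute ceiling in this sketch is the stock kubelet default and an assumption, since this log only reaches 20s:
	
	package main
	
	import (
		"fmt"
		"time"
	)
	
	func main() {
		const maxDelay = 5 * time.Minute // assumed stock-kubelet cap
		delay := 10 * time.Second        // first back-off seen in the log
		for attempt := 1; attempt <= 8; attempt++ {
			fmt.Printf("restart %d: back-off %s\n", attempt, delay)
			delay *= 2
			if delay > maxDelay {
				delay = maxDelay
			}
		}
	}
	
	Once a container sustains a clean run, the kubelet eventually resets its back-off, which is why a flapping container can oscillate between short and long delays.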
	
	
	==> kubernetes-dashboard [7cacf35d21de93fc134f70c789842b0bb01f94a9f81d988ca016c0707c19e476] <==
	2025/11/24 14:16:28 Using namespace: kubernetes-dashboard
	2025/11/24 14:16:28 Using in-cluster config to connect to apiserver
	2025/11/24 14:16:28 Using secret token for csrf signing
	2025/11/24 14:16:28 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/24 14:16:28 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/24 14:16:28 Successful initial request to the apiserver, version: v1.28.0
	2025/11/24 14:16:28 Generating JWE encryption key
	2025/11/24 14:16:28 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/24 14:16:28 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/24 14:16:29 Initializing JWE encryption key from synchronized object
	2025/11/24 14:16:29 Creating in-cluster Sidecar client
	2025/11/24 14:16:29 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/24 14:16:29 Serving insecurely on HTTP port: 9090
	2025/11/24 14:16:59 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/24 14:16:28 Starting overwatch
	
	
	==> storage-provisioner [3f6d39d6f582e8f8f5d54e4b32d73192d578d30ce39530503930d8ec0e325ccd] <==
	I1124 14:16:11.383513       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1124 14:16:41.393499       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [fd07626170d21832af144510fb26073eb6f20f7c7dcca410390fc64f77e7f864] <==
	I1124 14:16:41.593500       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1124 14:16:41.606382       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1124 14:16:41.606432       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1124 14:16:59.015904       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1124 14:16:59.016118       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-706771_6bb6e9c0-2467-41b0-8161-cacc4f40415b!
	I1124 14:16:59.018874       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"83d8df4a-4d76-4886-b058-96eaa24ce4dc", APIVersion:"v1", ResourceVersion:"667", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-706771_6bb6e9c0-2467-41b0-8161-cacc4f40415b became leader
	I1124 14:16:59.117240       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-706771_6bb6e9c0-2467-41b0-8161-cacc4f40415b!
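	
	The two storage-provisioner blocks explain its restart: the first instance could not reach the kubernetes service VIP (dial tcp 10.96.0.1:443: i/o timeout) and exited fatally, and its replacement then acquired the kube-system/k8s.io-minikube-hostpath lease before starting the controller. The event above shows this provisioner still uses an Endpoints-based lock; current client-go prefers Lease objects. A minimal leader-election sketch with the modern lock, assuming in-cluster credentials (the names mirror the lease above, but the code is illustrative, not the provisioner's own):
	
	package main
	
	import (
		"context"
		"log"
		"os"
		"time"
	
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
		"k8s.io/client-go/tools/leaderelection"
		"k8s.io/client-go/tools/leaderelection/resourcelock"
	)
	
	func main() {
		cfg, err := rest.InClusterConfig()
		if err != nil {
			log.Fatal(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)
		id, _ := os.Hostname()
	
		lock := &resourcelock.LeaseLock{
			LeaseMeta: metav1.ObjectMeta{
				Namespace: "kube-system",
				Name:      "k8s.io-minikube-hostpath",
			},
			Client:     client.CoordinationV1(),
			LockConfig: resourcelock.ResourceLockConfig{Identity: id},
		}
	
		leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
			Lock:          lock,
			LeaseDuration: 15 * time.Second,
			RenewDeadline: 10 * time.Second,
			RetryPeriod:   2 * time.Second,
			Callbacks: leaderelection.LeaderCallbacks{
				OnStartedLeading: func(ctx context.Context) {
					log.Println("acquired lease; starting provisioner controller")
					<-ctx.Done()
				},
				OnStoppedLeading: func() { log.Println("lost lease; shutting down") },
			},
		})
	}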
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-706771 -n old-k8s-version-706771
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-706771 -n old-k8s-version-706771: exit status 2 (684.291257ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-706771 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-706771
helpers_test.go:243: (dbg) docker inspect old-k8s-version-706771:

-- stdout --
	[
	    {
	        "Id": "2c35ba6c594249d37f1a4deea6c290d24e73eb238087551c9a88d96856e2c4a5",
	        "Created": "2025-11-24T14:14:37.23388933Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 184499,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T14:15:57.514328709Z",
	            "FinishedAt": "2025-11-24T14:15:56.67845261Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/2c35ba6c594249d37f1a4deea6c290d24e73eb238087551c9a88d96856e2c4a5/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2c35ba6c594249d37f1a4deea6c290d24e73eb238087551c9a88d96856e2c4a5/hostname",
	        "HostsPath": "/var/lib/docker/containers/2c35ba6c594249d37f1a4deea6c290d24e73eb238087551c9a88d96856e2c4a5/hosts",
	        "LogPath": "/var/lib/docker/containers/2c35ba6c594249d37f1a4deea6c290d24e73eb238087551c9a88d96856e2c4a5/2c35ba6c594249d37f1a4deea6c290d24e73eb238087551c9a88d96856e2c4a5-json.log",
	        "Name": "/old-k8s-version-706771",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-706771:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-706771",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "2c35ba6c594249d37f1a4deea6c290d24e73eb238087551c9a88d96856e2c4a5",
	                "LowerDir": "/var/lib/docker/overlay2/579064394e8bd6cc39cd24d2c9fba4cd60161e321fb2311b577ba2e021b4a846-init/diff:/var/lib/docker/overlay2/13a44a1c9c7389f495d930a01834ff28273a0e5eb2fe3411fc4db3ff0709690d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/579064394e8bd6cc39cd24d2c9fba4cd60161e321fb2311b577ba2e021b4a846/merged",
	                "UpperDir": "/var/lib/docker/overlay2/579064394e8bd6cc39cd24d2c9fba4cd60161e321fb2311b577ba2e021b4a846/diff",
	                "WorkDir": "/var/lib/docker/overlay2/579064394e8bd6cc39cd24d2c9fba4cd60161e321fb2311b577ba2e021b4a846/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-706771",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-706771/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-706771",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-706771",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-706771",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ef8307883d1a7d1659ce14cdd9da7911669f832792e2815eb8e829415ea70fdf",
	            "SandboxKey": "/var/run/docker/netns/ef8307883d1a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33053"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33054"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33057"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33055"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33056"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-706771": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ce:57:f6:4e:48:1c",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e96418466e2f103f798236cd2dcf5c79e483562bd7b0670ad5747c94e35ac056",
	                    "EndpointID": "063fbd8f0cf65bfa5719910692cc6bac45cf7f022eb7b8a64df536e2b7ea6e82",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-706771",
	                        "2c35ba6c5942"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
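
When only a field or two is needed from this dump, docker's --format flag takes a Go template over the same structure, which is cheaper than scanning the full JSON. The field paths below come straight from the output above, and shelling out mirrors how the harness drives the CLI:

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Pull single fields out of `docker inspect` with a Go template instead
	// of parsing the whole JSON document. Network names with dashes need the
	// template index function rather than dotted field access.
	out, err := exec.Command("docker", "inspect",
		"--format",
		`{{.State.Status}} {{(index .NetworkSettings.Networks "old-k8s-version-706771").IPAddress}}`,
		"old-k8s-version-706771").Output()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Print(string(out)) // e.g. "running 192.168.76.2"
}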
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-706771 -n old-k8s-version-706771
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-706771 -n old-k8s-version-706771: exit status 2 (603.073484ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-706771 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-706771 logs -n 25: (1.964406365s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-626991 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-626991             │ jenkins │ v1.37.0 │ 24 Nov 25 14:13 UTC │                     │
	│ ssh     │ -p cilium-626991 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-626991             │ jenkins │ v1.37.0 │ 24 Nov 25 14:13 UTC │                     │
	│ ssh     │ -p cilium-626991 sudo containerd config dump                                                                                                                                                                                                  │ cilium-626991             │ jenkins │ v1.37.0 │ 24 Nov 25 14:13 UTC │                     │
	│ ssh     │ -p cilium-626991 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-626991             │ jenkins │ v1.37.0 │ 24 Nov 25 14:13 UTC │                     │
	│ ssh     │ -p cilium-626991 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-626991             │ jenkins │ v1.37.0 │ 24 Nov 25 14:13 UTC │                     │
	│ ssh     │ -p cilium-626991 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-626991             │ jenkins │ v1.37.0 │ 24 Nov 25 14:13 UTC │                     │
	│ ssh     │ -p cilium-626991 sudo crio config                                                                                                                                                                                                             │ cilium-626991             │ jenkins │ v1.37.0 │ 24 Nov 25 14:13 UTC │                     │
	│ delete  │ -p cilium-626991                                                                                                                                                                                                                              │ cilium-626991             │ jenkins │ v1.37.0 │ 24 Nov 25 14:13 UTC │ 24 Nov 25 14:13 UTC │
	│ start   │ -p force-systemd-env-289577 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-289577  │ jenkins │ v1.37.0 │ 24 Nov 25 14:13 UTC │ 24 Nov 25 14:13 UTC │
	│ ssh     │ force-systemd-flag-928059 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-928059 │ jenkins │ v1.37.0 │ 24 Nov 25 14:13 UTC │ 24 Nov 25 14:13 UTC │
	│ delete  │ -p force-systemd-flag-928059                                                                                                                                                                                                                  │ force-systemd-flag-928059 │ jenkins │ v1.37.0 │ 24 Nov 25 14:13 UTC │ 24 Nov 25 14:13 UTC │
	│ start   │ -p cert-expiration-032076 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-032076    │ jenkins │ v1.37.0 │ 24 Nov 25 14:13 UTC │ 24 Nov 25 14:14 UTC │
	│ delete  │ -p force-systemd-env-289577                                                                                                                                                                                                                   │ force-systemd-env-289577  │ jenkins │ v1.37.0 │ 24 Nov 25 14:13 UTC │ 24 Nov 25 14:13 UTC │
	│ start   │ -p cert-options-097221 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-097221       │ jenkins │ v1.37.0 │ 24 Nov 25 14:13 UTC │ 24 Nov 25 14:14 UTC │
	│ ssh     │ cert-options-097221 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-097221       │ jenkins │ v1.37.0 │ 24 Nov 25 14:14 UTC │ 24 Nov 25 14:14 UTC │
	│ ssh     │ -p cert-options-097221 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-097221       │ jenkins │ v1.37.0 │ 24 Nov 25 14:14 UTC │ 24 Nov 25 14:14 UTC │
	│ delete  │ -p cert-options-097221                                                                                                                                                                                                                        │ cert-options-097221       │ jenkins │ v1.37.0 │ 24 Nov 25 14:14 UTC │ 24 Nov 25 14:14 UTC │
	│ start   │ -p old-k8s-version-706771 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-706771    │ jenkins │ v1.37.0 │ 24 Nov 25 14:14 UTC │ 24 Nov 25 14:15 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-706771 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-706771    │ jenkins │ v1.37.0 │ 24 Nov 25 14:15 UTC │                     │
	│ stop    │ -p old-k8s-version-706771 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-706771    │ jenkins │ v1.37.0 │ 24 Nov 25 14:15 UTC │ 24 Nov 25 14:15 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-706771 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-706771    │ jenkins │ v1.37.0 │ 24 Nov 25 14:15 UTC │ 24 Nov 25 14:15 UTC │
	│ start   │ -p old-k8s-version-706771 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-706771    │ jenkins │ v1.37.0 │ 24 Nov 25 14:15 UTC │ 24 Nov 25 14:16 UTC │
	│ image   │ old-k8s-version-706771 image list --format=json                                                                                                                                                                                               │ old-k8s-version-706771    │ jenkins │ v1.37.0 │ 24 Nov 25 14:16 UTC │ 24 Nov 25 14:16 UTC │
	│ pause   │ -p old-k8s-version-706771 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-706771    │ jenkins │ v1.37.0 │ 24 Nov 25 14:16 UTC │                     │
	│ start   │ -p cert-expiration-032076 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-032076    │ jenkins │ v1.37.0 │ 24 Nov 25 14:17 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 14:17:00
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 14:17:00.866915  187190 out.go:360] Setting OutFile to fd 1 ...
	I1124 14:17:00.867037  187190 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 14:17:00.867041  187190 out.go:374] Setting ErrFile to fd 2...
	I1124 14:17:00.867044  187190 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 14:17:00.867318  187190 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-2805/.minikube/bin
	I1124 14:17:00.870427  187190 out.go:368] Setting JSON to false
	I1124 14:17:00.875659  187190 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":7172,"bootTime":1763986649,"procs":208,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1124 14:17:00.875745  187190 start.go:143] virtualization:  
	I1124 14:17:00.883519  187190 out.go:179] * [cert-expiration-032076] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1124 14:17:00.886727  187190 out.go:179]   - MINIKUBE_LOCATION=21932
	I1124 14:17:00.886841  187190 notify.go:221] Checking for updates...
	I1124 14:17:00.892634  187190 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 14:17:00.897686  187190 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21932-2805/kubeconfig
	I1124 14:17:00.900732  187190 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21932-2805/.minikube
	I1124 14:17:00.903789  187190 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1124 14:17:00.906789  187190 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 14:17:00.910240  187190 config.go:182] Loaded profile config "cert-expiration-032076": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 14:17:00.910898  187190 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 14:17:00.945623  187190 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1124 14:17:00.945741  187190 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 14:17:01.069869  187190 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-24 14:17:01.056446824 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 14:17:01.070028  187190 docker.go:319] overlay module found
	I1124 14:17:01.073149  187190 out.go:179] * Using the docker driver based on existing profile
	I1124 14:17:01.076178  187190 start.go:309] selected driver: docker
	I1124 14:17:01.076191  187190 start.go:927] validating driver "docker" against &{Name:cert-expiration-032076 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-032076 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 14:17:01.076612  187190 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 14:17:01.077694  187190 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 14:17:01.206810  187190 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-24 14:17:01.189609756 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 14:17:01.207231  187190 cni.go:84] Creating CNI manager for ""
	I1124 14:17:01.207300  187190 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 14:17:01.207342  187190 start.go:353] cluster config:
	{Name:cert-expiration-032076 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-032076 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:8760h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 14:17:01.210809  187190 out.go:179] * Starting "cert-expiration-032076" primary control-plane node in "cert-expiration-032076" cluster
	I1124 14:17:01.215908  187190 cache.go:134] Beginning downloading kic base image for docker with crio
	I1124 14:17:01.223717  187190 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1124 14:17:01.229951  187190 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 14:17:01.229994  187190 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21932-2805/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1124 14:17:01.230005  187190 cache.go:65] Caching tarball of preloaded images
	I1124 14:17:01.230116  187190 preload.go:238] Found /home/jenkins/minikube-integration/21932-2805/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1124 14:17:01.230124  187190 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1124 14:17:01.230251  187190 profile.go:143] Saving config to /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/cert-expiration-032076/config.json ...
	I1124 14:17:01.230516  187190 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1124 14:17:01.262049  187190 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1124 14:17:01.262062  187190 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1124 14:17:01.262076  187190 cache.go:240] Successfully downloaded all kic artifacts
	I1124 14:17:01.262110  187190 start.go:360] acquireMachinesLock for cert-expiration-032076: {Name:mkcbca634d7c259216be0b8eb294898994068d4f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 14:17:01.262173  187190 start.go:364] duration metric: took 43.668µs to acquireMachinesLock for "cert-expiration-032076"
	I1124 14:17:01.262194  187190 start.go:96] Skipping create...Using existing machine configuration
	I1124 14:17:01.262199  187190 fix.go:54] fixHost starting: 
	I1124 14:17:01.262524  187190 cli_runner.go:164] Run: docker container inspect cert-expiration-032076 --format={{.State.Status}}
	I1124 14:17:01.292888  187190 fix.go:112] recreateIfNeeded on cert-expiration-032076: state=Running err=<nil>
	W1124 14:17:01.292911  187190 fix.go:138] unexpected machine state, will restart: <nil>
	
	
	==> CRI-O <==
	Nov 24 14:16:44 old-k8s-version-706771 crio[656]: time="2025-11-24T14:16:44.042752299Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=b3ceea7e-8275-4f3d-b4d8-742d0f73493a name=/runtime.v1.ImageService/ImageStatus
	Nov 24 14:16:44 old-k8s-version-706771 crio[656]: time="2025-11-24T14:16:44.044003981Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=5aab024a-be03-4ebf-bb60-5c0f6ee65f15 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 14:16:44 old-k8s-version-706771 crio[656]: time="2025-11-24T14:16:44.045189528Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-lbj84/dashboard-metrics-scraper" id=9b082b1e-c152-47c6-bc75-944971c3002b name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 14:16:44 old-k8s-version-706771 crio[656]: time="2025-11-24T14:16:44.045464756Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 14:16:44 old-k8s-version-706771 crio[656]: time="2025-11-24T14:16:44.053095002Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 14:16:44 old-k8s-version-706771 crio[656]: time="2025-11-24T14:16:44.054026893Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 14:16:44 old-k8s-version-706771 crio[656]: time="2025-11-24T14:16:44.069340892Z" level=info msg="Created container 943b2469fd266d2e12ec380d77c35b02f3076920d7d98bb2179ebb87698073e6: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-lbj84/dashboard-metrics-scraper" id=9b082b1e-c152-47c6-bc75-944971c3002b name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 14:16:44 old-k8s-version-706771 crio[656]: time="2025-11-24T14:16:44.07203702Z" level=info msg="Starting container: 943b2469fd266d2e12ec380d77c35b02f3076920d7d98bb2179ebb87698073e6" id=63c61d93-baee-4add-9da2-f55eb3c6b4f9 name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 14:16:44 old-k8s-version-706771 crio[656]: time="2025-11-24T14:16:44.07425581Z" level=info msg="Started container" PID=1641 containerID=943b2469fd266d2e12ec380d77c35b02f3076920d7d98bb2179ebb87698073e6 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-lbj84/dashboard-metrics-scraper id=63c61d93-baee-4add-9da2-f55eb3c6b4f9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=6e6155867cf0eea6395d290dc3654b576c763ea67d4313e21a26f52430a9b11c
	Nov 24 14:16:44 old-k8s-version-706771 conmon[1639]: conmon 943b2469fd266d2e12ec <ninfo>: container 1641 exited with status 1
	Nov 24 14:16:44 old-k8s-version-706771 crio[656]: time="2025-11-24T14:16:44.560348259Z" level=info msg="Removing container: aae3155d6c605f8339e180eb85589bc068e664c896a0197580d0458ae3f32759" id=c40b91fc-20b1-4982-817a-cee15956ec7f name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 24 14:16:44 old-k8s-version-706771 crio[656]: time="2025-11-24T14:16:44.572412195Z" level=info msg="Error loading conmon cgroup of container aae3155d6c605f8339e180eb85589bc068e664c896a0197580d0458ae3f32759: cgroup deleted" id=c40b91fc-20b1-4982-817a-cee15956ec7f name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 24 14:16:44 old-k8s-version-706771 crio[656]: time="2025-11-24T14:16:44.575929617Z" level=info msg="Removed container aae3155d6c605f8339e180eb85589bc068e664c896a0197580d0458ae3f32759: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-lbj84/dashboard-metrics-scraper" id=c40b91fc-20b1-4982-817a-cee15956ec7f name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 24 14:16:51 old-k8s-version-706771 crio[656]: time="2025-11-24T14:16:51.271802882Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 24 14:16:51 old-k8s-version-706771 crio[656]: time="2025-11-24T14:16:51.280343275Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 24 14:16:51 old-k8s-version-706771 crio[656]: time="2025-11-24T14:16:51.280381175Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 24 14:16:51 old-k8s-version-706771 crio[656]: time="2025-11-24T14:16:51.280426845Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 24 14:16:51 old-k8s-version-706771 crio[656]: time="2025-11-24T14:16:51.286574261Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 24 14:16:51 old-k8s-version-706771 crio[656]: time="2025-11-24T14:16:51.286611759Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 24 14:16:51 old-k8s-version-706771 crio[656]: time="2025-11-24T14:16:51.286636169Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 24 14:16:51 old-k8s-version-706771 crio[656]: time="2025-11-24T14:16:51.291126672Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 24 14:16:51 old-k8s-version-706771 crio[656]: time="2025-11-24T14:16:51.291299015Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 24 14:16:51 old-k8s-version-706771 crio[656]: time="2025-11-24T14:16:51.291461256Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 24 14:16:51 old-k8s-version-706771 crio[656]: time="2025-11-24T14:16:51.295283936Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 24 14:16:51 old-k8s-version-706771 crio[656]: time="2025-11-24T14:16:51.295314452Z" level=info msg="Updated default CNI network name to kindnet"
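	
	The CRI-O lines trace one CRI round trip per crash-loop iteration: ImageStatus resolves registry.k8s.io/echoserver:1.4, CreateContainer and StartContainer launch attempt 2, conmon reports the immediate exit with status 1, and RemoveContainer discards the prior attempt. The same gRPC surface (the /runtime.v1.RuntimeService/* names in the ids above) can be queried directly; a sketch, assuming CRI-O's default socket path, whose output lines up with the "container status" table that follows (crictl ps -a gives the same view from a shell):
	
	package main
	
	import (
		"context"
		"fmt"
		"log"
		"time"
	
		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)
	
	func main() {
		// Dial CRI-O over its CRI socket, the same gRPC API the kubelet uses.
		// The socket path is CRI-O's default and an assumption for this sketch.
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			log.Fatal(err)
		}
		defer conn.Close()
	
		client := runtimeapi.NewRuntimeServiceClient(conn)
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()
	
		resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		if err != nil {
			log.Fatal(err)
		}
		for _, c := range resp.Containers {
			// Truncated id, state enum (e.g. CONTAINER_EXITED), and name,
			// mirroring the CONTAINER / STATE / NAME columns below.
			fmt.Println(c.Id[:13], c.State, c.Metadata.Name)
		}
	}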
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	943b2469fd266       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           18 seconds ago      Exited              dashboard-metrics-scraper   2                   6e6155867cf0e       dashboard-metrics-scraper-5f989dc9cf-lbj84       kubernetes-dashboard
	fd07626170d21       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           21 seconds ago      Running             storage-provisioner         2                   20cd065304099       storage-provisioner                              kube-system
	7cacf35d21de9       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   33 seconds ago      Running             kubernetes-dashboard        0                   9e90b372dadb1       kubernetes-dashboard-8694d4445c-54hbr            kubernetes-dashboard
	52e700d0e4231       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           51 seconds ago      Running             busybox                     1                   4fff7d3fbf179       busybox                                          default
	9169afa245f15       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                           51 seconds ago      Running             coredns                     1                   9525f05113187       coredns-5dd5756b68-znmnc                         kube-system
	662792c1d19f8       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           51 seconds ago      Running             kindnet-cni                 1                   edcbf6e42dd29       kindnet-95mv4                                    kube-system
	6bfac4080fd66       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                           51 seconds ago      Running             kube-proxy                  1                   b4eae84d07149       kube-proxy-b7d5h                                 kube-system
	3f6d39d6f582e       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           51 seconds ago      Exited              storage-provisioner         1                   20cd065304099       storage-provisioner                              kube-system
	4462433cacd7f       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                           57 seconds ago      Running             kube-scheduler              1                   bae09cfa68755       kube-scheduler-old-k8s-version-706771            kube-system
	d2ae3d6088d6c       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                           57 seconds ago      Running             etcd                        1                   93911d0778261       etcd-old-k8s-version-706771                      kube-system
	39e49abd24663       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                           57 seconds ago      Running             kube-apiserver              1                   1208c148bac04       kube-apiserver-old-k8s-version-706771            kube-system
	065e223559053       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                           57 seconds ago      Running             kube-controller-manager     1                   51ce8e4b4aa1e       kube-controller-manager-old-k8s-version-706771   kube-system
	
	
	==> coredns [9169afa245f154ace8b11f56579ef064687ab842b2b449fc8e57beaf11efb2fc] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:49040 - 58573 "HINFO IN 7364402776033981871.1963845384173948205. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010621788s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> describe nodes <==
	Name:               old-k8s-version-706771
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-706771
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b5d1c9f4e75f4e638a533695fd62619949cefcab
	                    minikube.k8s.io/name=old-k8s-version-706771
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T14_15_03_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 14:14:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-706771
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 14:16:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 14:16:41 +0000   Mon, 24 Nov 2025 14:14:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 14:16:41 +0000   Mon, 24 Nov 2025 14:14:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 14:16:41 +0000   Mon, 24 Nov 2025 14:14:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Nov 2025 14:16:41 +0000   Mon, 24 Nov 2025 14:15:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-706771
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 7283ea1857f18f20a875c29069214c9d
	  System UUID:                14492c3a-8806-4276-8078-fdf3e23d5fc8
	  Boot ID:                    1b5f797b-5607-4a65-8de2-379783b7e272
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 coredns-5dd5756b68-znmnc                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     108s
	  kube-system                 etcd-old-k8s-version-706771                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m
	  kube-system                 kindnet-95mv4                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      108s
	  kube-system                 kube-apiserver-old-k8s-version-706771             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-controller-manager-old-k8s-version-706771    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-proxy-b7d5h                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 kube-scheduler-old-k8s-version-706771             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-lbj84        0 (0%)        0 (0%)      0 (0%)           0 (0%)         39s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-54hbr             0 (0%)        0 (0%)      0 (0%)           0 (0%)         39s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 107s                 kube-proxy       
	  Normal  Starting                 51s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  2m8s (x8 over 2m8s)  kubelet          Node old-k8s-version-706771 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m8s (x8 over 2m8s)  kubelet          Node old-k8s-version-706771 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m8s (x8 over 2m8s)  kubelet          Node old-k8s-version-706771 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     2m                   kubelet          Node old-k8s-version-706771 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  2m                   kubelet          Node old-k8s-version-706771 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m                   kubelet          Node old-k8s-version-706771 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 2m                   kubelet          Starting kubelet.
	  Normal  RegisteredNode           109s                 node-controller  Node old-k8s-version-706771 event: Registered Node old-k8s-version-706771 in Controller
	  Normal  NodeReady                94s                  kubelet          Node old-k8s-version-706771 status is now: NodeReady
	  Normal  Starting                 58s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  58s (x8 over 58s)    kubelet          Node old-k8s-version-706771 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    58s (x8 over 58s)    kubelet          Node old-k8s-version-706771 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     58s (x8 over 58s)    kubelet          Node old-k8s-version-706771 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           39s                  node-controller  Node old-k8s-version-706771 event: Registered Node old-k8s-version-706771 in Controller
	
	
	==> dmesg <==
	[Nov24 13:46] overlayfs: idmapped layers are currently not supported
	[Nov24 13:52] overlayfs: idmapped layers are currently not supported
	[ +31.432146] overlayfs: idmapped layers are currently not supported
	[Nov24 13:53] overlayfs: idmapped layers are currently not supported
	[Nov24 13:54] overlayfs: idmapped layers are currently not supported
	[Nov24 13:56] overlayfs: idmapped layers are currently not supported
	[Nov24 13:57] overlayfs: idmapped layers are currently not supported
	[Nov24 13:58] overlayfs: idmapped layers are currently not supported
	[  +2.963383] overlayfs: idmapped layers are currently not supported
	[ +47.364934] overlayfs: idmapped layers are currently not supported
	[Nov24 13:59] overlayfs: idmapped layers are currently not supported
	[Nov24 14:00] overlayfs: idmapped layers are currently not supported
	[ +26.972375] overlayfs: idmapped layers are currently not supported
	[Nov24 14:02] overlayfs: idmapped layers are currently not supported
	[Nov24 14:03] overlayfs: idmapped layers are currently not supported
	[Nov24 14:05] overlayfs: idmapped layers are currently not supported
	[Nov24 14:07] overlayfs: idmapped layers are currently not supported
	[ +22.741489] overlayfs: idmapped layers are currently not supported
	[Nov24 14:11] overlayfs: idmapped layers are currently not supported
	[Nov24 14:13] overlayfs: idmapped layers are currently not supported
	[ +29.661409] overlayfs: idmapped layers are currently not supported
	[ +14.398898] overlayfs: idmapped layers are currently not supported
	[Nov24 14:14] overlayfs: idmapped layers are currently not supported
	[ +36.148198] overlayfs: idmapped layers are currently not supported
	[Nov24 14:16] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [d2ae3d6088d6c2a4d1067c258f51a7ecb4899a0bf6d5568e6df682d6446ca5c7] <==
	{"level":"info","ts":"2025-11-24T14:16:05.435519Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-24T14:16:05.435661Z","caller":"etcdserver/server.go:754","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2025-11-24T14:16:05.439312Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-24T14:16:05.439891Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-24T14:16:05.439969Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-24T14:16:05.443289Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-11-24T14:16:05.443474Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-11-24T14:16:05.44412Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2025-11-24T14:16:05.444342Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2025-11-24T14:16:05.448157Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-24T14:16:05.448211Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-24T14:16:07.067409Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2025-11-24T14:16:07.067518Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-11-24T14:16:07.067572Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-11-24T14:16:07.067616Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2025-11-24T14:16:07.067656Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-11-24T14:16:07.067696Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2025-11-24T14:16:07.067726Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-11-24T14:16:07.071565Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:old-k8s-version-706771 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-24T14:16:07.071778Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-24T14:16:07.072793Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-11-24T14:16:07.072909Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-24T14:16:07.077066Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-11-24T14:16:07.084726Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-24T14:16:07.084831Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 14:17:03 up  1:59,  0 user,  load average: 1.74, 2.50, 2.27
	Linux old-k8s-version-706771 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [662792c1d19f88cbc7c05bc4aa78cf420616277ed272fd2a62c1fd7eea280ac9] <==
	I1124 14:16:11.039698       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1124 14:16:11.040093       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1124 14:16:11.112943       1 main.go:148] setting mtu 1500 for CNI 
	I1124 14:16:11.113083       1 main.go:178] kindnetd IP family: "ipv4"
	I1124 14:16:11.113127       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-24T14:16:11Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1124 14:16:11.313982       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1124 14:16:11.314011       1 controller.go:381] "Waiting for informer caches to sync"
	I1124 14:16:11.314021       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1124 14:16:11.314416       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1124 14:16:41.265594       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1124 14:16:41.314288       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1124 14:16:41.314306       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1124 14:16:41.315336       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1124 14:16:42.814109       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1124 14:16:42.814144       1 metrics.go:72] Registering metrics
	I1124 14:16:42.814215       1 controller.go:711] "Syncing nftables rules"
	I1124 14:16:51.270811       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1124 14:16:51.270858       1 main.go:301] handling current node
	I1124 14:17:01.270486       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1124 14:17:01.270533       1 main.go:301] handling current node
	
	
	==> kube-apiserver [39e49abd246636c62439ed76a4481780faa5610ca54df7818f5d5ec1656a3fc6] <==
	I1124 14:16:10.331341       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1124 14:16:10.331823       1 aggregator.go:166] initial CRD sync complete...
	I1124 14:16:10.331849       1 autoregister_controller.go:141] Starting autoregister controller
	I1124 14:16:10.331864       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1124 14:16:10.331872       1 cache.go:39] Caches are synced for autoregister controller
	I1124 14:16:10.338464       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1124 14:16:10.338661       1 shared_informer.go:318] Caches are synced for configmaps
	I1124 14:16:10.339180       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1124 14:16:10.339199       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1124 14:16:10.344088       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1124 14:16:10.370761       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1124 14:16:10.372106       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1124 14:16:10.413596       1 shared_informer.go:318] Caches are synced for node_authorizer
	E1124 14:16:10.471491       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1124 14:16:11.004350       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1124 14:16:12.388136       1 controller.go:624] quota admission added evaluator for: namespaces
	I1124 14:16:12.434478       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1124 14:16:12.460834       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1124 14:16:12.470301       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1124 14:16:12.482009       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1124 14:16:12.535262       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.106.59.133"}
	I1124 14:16:12.558388       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.109.105.60"}
	I1124 14:16:23.665552       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1124 14:16:23.781193       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1124 14:16:23.816991       1 controller.go:624] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [065e2235590533b061f78ed69d188fc0f922c3d8f7c9e6624ca6b074b6ff8055] <==
	I1124 14:16:23.680720       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-8694d4445c to 1"
	I1124 14:16:23.713006       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-54hbr"
	I1124 14:16:23.713129       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-lbj84"
	I1124 14:16:23.731163       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="48.962743ms"
	I1124 14:16:23.746915       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="66.075278ms"
	I1124 14:16:23.753803       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="22.586548ms"
	I1124 14:16:23.753946       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="41.429µs"
	I1124 14:16:23.761001       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="13.940979ms"
	I1124 14:16:23.761129       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="88.903µs"
	I1124 14:16:23.771474       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="45.867µs"
	I1124 14:16:23.790798       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="61.712µs"
	I1124 14:16:23.828736       1 shared_informer.go:318] Caches are synced for garbage collector
	I1124 14:16:23.828766       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1124 14:16:23.831588       1 event.go:307] "Event occurred" object="kubernetes-dashboard" fieldPath="" kind="Endpoints" apiVersion="v1" type="Warning" reason="FailedToCreateEndpoint" message="Failed to create endpoint for service kubernetes-dashboard/kubernetes-dashboard: endpoints \"kubernetes-dashboard\" already exists"
	I1124 14:16:23.833356       1 event.go:307] "Event occurred" object="dashboard-metrics-scraper" fieldPath="" kind="Endpoints" apiVersion="v1" type="Warning" reason="FailedToCreateEndpoint" message="Failed to create endpoint for service kubernetes-dashboard/dashboard-metrics-scraper: endpoints \"dashboard-metrics-scraper\" already exists"
	I1124 14:16:23.874991       1 shared_informer.go:318] Caches are synced for garbage collector
	I1124 14:16:29.534571       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="13.925758ms"
	I1124 14:16:29.534878       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="127.763µs"
	I1124 14:16:33.537170       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="67.652µs"
	I1124 14:16:34.537270       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="63.229µs"
	I1124 14:16:35.534197       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="50.159µs"
	I1124 14:16:42.928327       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="12.743796ms"
	I1124 14:16:42.929475       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="39.238µs"
	I1124 14:16:44.570918       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="54.433µs"
	I1124 14:16:54.056564       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="78.467µs"
	
	
	==> kube-proxy [6bfac4080fd664cdaea1a7c4e8cdce7bf4757d53582813085720bab8e65f5a85] <==
	I1124 14:16:11.610695       1 server_others.go:69] "Using iptables proxy"
	I1124 14:16:11.673100       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I1124 14:16:11.776415       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 14:16:11.779799       1 server_others.go:152] "Using iptables Proxier"
	I1124 14:16:11.779848       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1124 14:16:11.779857       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1124 14:16:11.779881       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1124 14:16:11.780111       1 server.go:846] "Version info" version="v1.28.0"
	I1124 14:16:11.780121       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 14:16:11.787257       1 config.go:188] "Starting service config controller"
	I1124 14:16:11.787279       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1124 14:16:11.787295       1 config.go:97] "Starting endpoint slice config controller"
	I1124 14:16:11.787300       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1124 14:16:11.789087       1 config.go:315] "Starting node config controller"
	I1124 14:16:11.789111       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1124 14:16:11.891429       1 shared_informer.go:318] Caches are synced for node config
	I1124 14:16:11.891470       1 shared_informer.go:318] Caches are synced for service config
	I1124 14:16:11.891503       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [4462433cacd7fb9b40a0bc0ba0ab22736d2abbc9e6137cbb6e9ad29470cd488b] <==
	I1124 14:16:08.310885       1 serving.go:348] Generated self-signed cert in-memory
	I1124 14:16:11.341114       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1124 14:16:11.341380       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 14:16:11.359482       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I1124 14:16:11.360353       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I1124 14:16:11.360475       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 14:16:11.360524       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1124 14:16:11.360900       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1124 14:16:11.360927       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1124 14:16:11.360955       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1124 14:16:11.368161       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1124 14:16:11.462387       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	I1124 14:16:11.466641       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1124 14:16:11.469710       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	
	
	==> kubelet <==
	Nov 24 14:16:23 old-k8s-version-706771 kubelet[784]: I1124 14:16:23.734022     784 topology_manager.go:215] "Topology Admit Handler" podUID="b1e8e1bb-4464-4324-905e-cd86fdac794e" podNamespace="kubernetes-dashboard" podName="dashboard-metrics-scraper-5f989dc9cf-lbj84"
	Nov 24 14:16:23 old-k8s-version-706771 kubelet[784]: I1124 14:16:23.859767     784 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pmf8\" (UniqueName: \"kubernetes.io/projected/72a01016-3826-4d99-9b43-7c88b607e64f-kube-api-access-5pmf8\") pod \"kubernetes-dashboard-8694d4445c-54hbr\" (UID: \"72a01016-3826-4d99-9b43-7c88b607e64f\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-54hbr"
	Nov 24 14:16:23 old-k8s-version-706771 kubelet[784]: I1124 14:16:23.859827     784 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/b1e8e1bb-4464-4324-905e-cd86fdac794e-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-lbj84\" (UID: \"b1e8e1bb-4464-4324-905e-cd86fdac794e\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-lbj84"
	Nov 24 14:16:23 old-k8s-version-706771 kubelet[784]: I1124 14:16:23.859856     784 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/72a01016-3826-4d99-9b43-7c88b607e64f-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-54hbr\" (UID: \"72a01016-3826-4d99-9b43-7c88b607e64f\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-54hbr"
	Nov 24 14:16:23 old-k8s-version-706771 kubelet[784]: I1124 14:16:23.859883     784 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wsbmk\" (UniqueName: \"kubernetes.io/projected/b1e8e1bb-4464-4324-905e-cd86fdac794e-kube-api-access-wsbmk\") pod \"dashboard-metrics-scraper-5f989dc9cf-lbj84\" (UID: \"b1e8e1bb-4464-4324-905e-cd86fdac794e\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-lbj84"
	Nov 24 14:16:24 old-k8s-version-706771 kubelet[784]: W1124 14:16:24.056982     784 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/2c35ba6c594249d37f1a4deea6c290d24e73eb238087551c9a88d96856e2c4a5/crio-9e90b372dadb18b384c7a861d678fbb49e7cc4ee6d9c0587b6cad81a80a4d40e WatchSource:0}: Error finding container 9e90b372dadb18b384c7a861d678fbb49e7cc4ee6d9c0587b6cad81a80a4d40e: Status 404 returned error can't find the container with id 9e90b372dadb18b384c7a861d678fbb49e7cc4ee6d9c0587b6cad81a80a4d40e
	Nov 24 14:16:24 old-k8s-version-706771 kubelet[784]: W1124 14:16:24.076621     784 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/2c35ba6c594249d37f1a4deea6c290d24e73eb238087551c9a88d96856e2c4a5/crio-6e6155867cf0eea6395d290dc3654b576c763ea67d4313e21a26f52430a9b11c WatchSource:0}: Error finding container 6e6155867cf0eea6395d290dc3654b576c763ea67d4313e21a26f52430a9b11c: Status 404 returned error can't find the container with id 6e6155867cf0eea6395d290dc3654b576c763ea67d4313e21a26f52430a9b11c
	Nov 24 14:16:33 old-k8s-version-706771 kubelet[784]: I1124 14:16:33.513415     784 scope.go:117] "RemoveContainer" containerID="b68615eb19f6fad93918a4d6d1d139f507dac74bb2277d72ce1ed87c49ad2d78"
	Nov 24 14:16:33 old-k8s-version-706771 kubelet[784]: I1124 14:16:33.532193     784 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-54hbr" podStartSLOduration=5.833518348 podCreationTimestamp="2025-11-24 14:16:23 +0000 UTC" firstStartedPulling="2025-11-24 14:16:24.062191268 +0000 UTC m=+19.898572927" lastFinishedPulling="2025-11-24 14:16:28.760795979 +0000 UTC m=+24.597177695" observedRunningTime="2025-11-24 14:16:29.520903158 +0000 UTC m=+25.357284825" watchObservedRunningTime="2025-11-24 14:16:33.532123116 +0000 UTC m=+29.368504775"
	Nov 24 14:16:34 old-k8s-version-706771 kubelet[784]: I1124 14:16:34.518334     784 scope.go:117] "RemoveContainer" containerID="aae3155d6c605f8339e180eb85589bc068e664c896a0197580d0458ae3f32759"
	Nov 24 14:16:34 old-k8s-version-706771 kubelet[784]: E1124 14:16:34.519102     784 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-lbj84_kubernetes-dashboard(b1e8e1bb-4464-4324-905e-cd86fdac794e)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-lbj84" podUID="b1e8e1bb-4464-4324-905e-cd86fdac794e"
	Nov 24 14:16:34 old-k8s-version-706771 kubelet[784]: I1124 14:16:34.519469     784 scope.go:117] "RemoveContainer" containerID="b68615eb19f6fad93918a4d6d1d139f507dac74bb2277d72ce1ed87c49ad2d78"
	Nov 24 14:16:35 old-k8s-version-706771 kubelet[784]: I1124 14:16:35.521246     784 scope.go:117] "RemoveContainer" containerID="aae3155d6c605f8339e180eb85589bc068e664c896a0197580d0458ae3f32759"
	Nov 24 14:16:35 old-k8s-version-706771 kubelet[784]: E1124 14:16:35.521541     784 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-lbj84_kubernetes-dashboard(b1e8e1bb-4464-4324-905e-cd86fdac794e)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-lbj84" podUID="b1e8e1bb-4464-4324-905e-cd86fdac794e"
	Nov 24 14:16:41 old-k8s-version-706771 kubelet[784]: I1124 14:16:41.536665     784 scope.go:117] "RemoveContainer" containerID="3f6d39d6f582e8f8f5d54e4b32d73192d578d30ce39530503930d8ec0e325ccd"
	Nov 24 14:16:44 old-k8s-version-706771 kubelet[784]: I1124 14:16:44.042139     784 scope.go:117] "RemoveContainer" containerID="aae3155d6c605f8339e180eb85589bc068e664c896a0197580d0458ae3f32759"
	Nov 24 14:16:44 old-k8s-version-706771 kubelet[784]: I1124 14:16:44.550703     784 scope.go:117] "RemoveContainer" containerID="aae3155d6c605f8339e180eb85589bc068e664c896a0197580d0458ae3f32759"
	Nov 24 14:16:44 old-k8s-version-706771 kubelet[784]: I1124 14:16:44.551171     784 scope.go:117] "RemoveContainer" containerID="943b2469fd266d2e12ec380d77c35b02f3076920d7d98bb2179ebb87698073e6"
	Nov 24 14:16:44 old-k8s-version-706771 kubelet[784]: E1124 14:16:44.551619     784 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-lbj84_kubernetes-dashboard(b1e8e1bb-4464-4324-905e-cd86fdac794e)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-lbj84" podUID="b1e8e1bb-4464-4324-905e-cd86fdac794e"
	Nov 24 14:16:54 old-k8s-version-706771 kubelet[784]: I1124 14:16:54.042235     784 scope.go:117] "RemoveContainer" containerID="943b2469fd266d2e12ec380d77c35b02f3076920d7d98bb2179ebb87698073e6"
	Nov 24 14:16:54 old-k8s-version-706771 kubelet[784]: E1124 14:16:54.042552     784 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-lbj84_kubernetes-dashboard(b1e8e1bb-4464-4324-905e-cd86fdac794e)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-lbj84" podUID="b1e8e1bb-4464-4324-905e-cd86fdac794e"
	Nov 24 14:16:56 old-k8s-version-706771 kubelet[784]: I1124 14:16:56.926290     784 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Nov 24 14:16:56 old-k8s-version-706771 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 24 14:16:56 old-k8s-version-706771 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 24 14:16:56 old-k8s-version-706771 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [7cacf35d21de93fc134f70c789842b0bb01f94a9f81d988ca016c0707c19e476] <==
	2025/11/24 14:16:28 Starting overwatch
	2025/11/24 14:16:28 Using namespace: kubernetes-dashboard
	2025/11/24 14:16:28 Using in-cluster config to connect to apiserver
	2025/11/24 14:16:28 Using secret token for csrf signing
	2025/11/24 14:16:28 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/24 14:16:28 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/24 14:16:28 Successful initial request to the apiserver, version: v1.28.0
	2025/11/24 14:16:28 Generating JWE encryption key
	2025/11/24 14:16:28 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/24 14:16:28 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/24 14:16:29 Initializing JWE encryption key from synchronized object
	2025/11/24 14:16:29 Creating in-cluster Sidecar client
	2025/11/24 14:16:29 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/24 14:16:29 Serving insecurely on HTTP port: 9090
	2025/11/24 14:16:59 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [3f6d39d6f582e8f8f5d54e4b32d73192d578d30ce39530503930d8ec0e325ccd] <==
	I1124 14:16:11.383513       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1124 14:16:41.393499       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [fd07626170d21832af144510fb26073eb6f20f7c7dcca410390fc64f77e7f864] <==
	I1124 14:16:41.593500       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1124 14:16:41.606382       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1124 14:16:41.606432       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1124 14:16:59.015904       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1124 14:16:59.016118       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-706771_6bb6e9c0-2467-41b0-8161-cacc4f40415b!
	I1124 14:16:59.018874       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"83d8df4a-4d76-4886-b058-96eaa24ce4dc", APIVersion:"v1", ResourceVersion:"667", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-706771_6bb6e9c0-2467-41b0-8161-cacc4f40415b became leader
	I1124 14:16:59.117240       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-706771_6bb6e9c0-2467-41b0-8161-cacc4f40415b!
	

                                                
                                                
-- /stdout --
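A recurring signal in the captured logs above is "dial tcp 10.96.0.1:443: i/o timeout" from coredns, kindnet, and the first storage-provisioner attempt: the in-cluster apiserver Service VIP was briefly unreachable while the node restarted. A minimal Go sketch of that reachability probe, assuming it runs from a pod on the node (the address is taken from the logs above, not from minikube defaults):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// 10.96.0.1:443 is the kubernetes Service VIP the components above dial.
		conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 5*time.Second)
		if err != nil {
			fmt.Println("service VIP unreachable:", err) // the i/o timeout case seen above
			return
		}
		conn.Close()
		fmt.Println("service VIP reachable")
	}

Once the informer caches synced (14:16:42 in the kindnet log), the VIP answered again, which lines up with the replacement storage-provisioner acquiring its lease at 14:16:59.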
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-706771 -n old-k8s-version-706771
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-706771 -n old-k8s-version-706771: exit status 2 (423.233548ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-706771 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (7.64s)
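The post-mortem above captures the mixed state behind this Pause failure: the kubelet unit is already stopped (the systemd lines at the end of the kubelet log) while the apiserver container still reports Running, so the status probe exits non-zero. A minimal sketch of that probe, reusing the exact invocation from helpers_test.go:262; it reproduces only the harness check, not minikube internals:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Same Go-template query the harness runs; a non-zero exit encodes
		// cluster state, and the harness treats exit status 2 as "may be ok".
		cmd := exec.Command("out/minikube-linux-arm64", "status",
			"--format={{.APIServer}}",
			"-p", "old-k8s-version-706771",
			"-n", "old-k8s-version-706771")
		out, err := cmd.CombinedOutput()
		fmt.Printf("apiserver: %s(err: %v)\n", out, err)
	}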

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.55s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-444317 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-444317 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (290.198348ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T14:18:36Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p no-preload-444317 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-444317 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-444317 describe deploy/metrics-server -n kube-system: exit status 1 (93.221013ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-444317 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
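The root cause reported in stderr above is the paused-state check: before enabling an addon, minikube shells out to `sudo runc list -f json` on the node, and runc exits 1 because /run/runc does not exist under this crio configuration, which aborts the command with MK_ADDON_ENABLE_PAUSED. A minimal sketch of that check, run on the node and assuming passwordless sudo; it approximates the probe named in the failure message rather than minikube's actual implementation:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// The exact command quoted in the failure: sudo runc list -f json.
		// With /run/runc missing it fails with "open /run/runc: no such file
		// or directory", and `addons enable` exits with status 11.
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
		if err != nil {
			fmt.Printf("check paused failed: %v\n%s", err, out)
			return
		}
		fmt.Printf("containers: %s", out)
	}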
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-444317
helpers_test.go:243: (dbg) docker inspect no-preload-444317:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ade20648158abf5218a944caa623bf3f6036e6cac8f095be63310184940923ce",
	        "Created": "2025-11-24T14:17:08.709891648Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 188603,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T14:17:08.833017463Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/ade20648158abf5218a944caa623bf3f6036e6cac8f095be63310184940923ce/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ade20648158abf5218a944caa623bf3f6036e6cac8f095be63310184940923ce/hostname",
	        "HostsPath": "/var/lib/docker/containers/ade20648158abf5218a944caa623bf3f6036e6cac8f095be63310184940923ce/hosts",
	        "LogPath": "/var/lib/docker/containers/ade20648158abf5218a944caa623bf3f6036e6cac8f095be63310184940923ce/ade20648158abf5218a944caa623bf3f6036e6cac8f095be63310184940923ce-json.log",
	        "Name": "/no-preload-444317",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-444317:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-444317",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ade20648158abf5218a944caa623bf3f6036e6cac8f095be63310184940923ce",
	                "LowerDir": "/var/lib/docker/overlay2/a5efc0bfe8b92c8f524d5dc30bc92e055435f884ab7bf2fa08436557c135aef1-init/diff:/var/lib/docker/overlay2/13a44a1c9c7389f495d930a01834ff28273a0e5eb2fe3411fc4db3ff0709690d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a5efc0bfe8b92c8f524d5dc30bc92e055435f884ab7bf2fa08436557c135aef1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a5efc0bfe8b92c8f524d5dc30bc92e055435f884ab7bf2fa08436557c135aef1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a5efc0bfe8b92c8f524d5dc30bc92e055435f884ab7bf2fa08436557c135aef1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-444317",
	                "Source": "/var/lib/docker/volumes/no-preload-444317/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-444317",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-444317",
	                "name.minikube.sigs.k8s.io": "no-preload-444317",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "453c7cb8a08130038db6d4482bc44748289f1419187b4c6b912bde938be6946e",
	            "SandboxKey": "/var/run/docker/netns/453c7cb8a081",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33058"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33059"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33062"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33060"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33061"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-444317": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "f6:f6:e6:04:73:65",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "02f1d732c24a57a6012dfe448076c210da6d01bbcb8679ec8ce3692995d11521",
	                    "EndpointID": "caa1fb975e8ca2ec4cd8b0a135b62044f7aee1c9986be63246700192be0f79ef",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-444317",
	                        "ade20648158a"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
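The inspect output above is where the published host ports come from: each container port (22, 2376, 5000, 8443, 32443) is bound to an ephemeral port on 127.0.0.1, and later steps in this log dial those ports. A minimal Go sketch of pulling one binding out of `docker container inspect` JSON, assuming only that docker is on PATH; the container name and port are taken from the output above, and this is an illustration of the data shape, not minikube's actual code (minikube reads the same field with a Go template, as seen further down in this log):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// container models just the NetworkSettings.Ports slice shown in the
// inspect JSON above.
type container struct {
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string
			HostPort string
		}
	}
}

// hostPort shells out to `docker container inspect` and returns the first
// host port bound to the given container port (e.g. "22/tcp").
func hostPort(name, port string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", name).Output()
	if err != nil {
		return "", err
	}
	var cs []container
	if err := json.Unmarshal(out, &cs); err != nil {
		return "", err
	}
	if len(cs) == 0 || len(cs[0].NetworkSettings.Ports[port]) == 0 {
		return "", fmt.Errorf("no binding for %s on %s", port, name)
	}
	return cs[0].NetworkSettings.Ports[port][0].HostPort, nil
}

func main() {
	p, err := hostPort("no-preload-444317", "22/tcp")
	fmt.Println(p, err) // "33058 <nil>" against the inspect output above
}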
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-444317 -n no-preload-444317
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-444317 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-444317 logs -n 25: (1.198462978s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-626991 sudo crio config                                                                                                                                                                                                             │ cilium-626991             │ jenkins │ v1.37.0 │ 24 Nov 25 14:13 UTC │                     │
	│ delete  │ -p cilium-626991                                                                                                                                                                                                                              │ cilium-626991             │ jenkins │ v1.37.0 │ 24 Nov 25 14:13 UTC │ 24 Nov 25 14:13 UTC │
	│ start   │ -p force-systemd-env-289577 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-289577  │ jenkins │ v1.37.0 │ 24 Nov 25 14:13 UTC │ 24 Nov 25 14:13 UTC │
	│ ssh     │ force-systemd-flag-928059 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-928059 │ jenkins │ v1.37.0 │ 24 Nov 25 14:13 UTC │ 24 Nov 25 14:13 UTC │
	│ delete  │ -p force-systemd-flag-928059                                                                                                                                                                                                                  │ force-systemd-flag-928059 │ jenkins │ v1.37.0 │ 24 Nov 25 14:13 UTC │ 24 Nov 25 14:13 UTC │
	│ start   │ -p cert-expiration-032076 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-032076    │ jenkins │ v1.37.0 │ 24 Nov 25 14:13 UTC │ 24 Nov 25 14:14 UTC │
	│ delete  │ -p force-systemd-env-289577                                                                                                                                                                                                                   │ force-systemd-env-289577  │ jenkins │ v1.37.0 │ 24 Nov 25 14:13 UTC │ 24 Nov 25 14:13 UTC │
	│ start   │ -p cert-options-097221 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-097221       │ jenkins │ v1.37.0 │ 24 Nov 25 14:13 UTC │ 24 Nov 25 14:14 UTC │
	│ ssh     │ cert-options-097221 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-097221       │ jenkins │ v1.37.0 │ 24 Nov 25 14:14 UTC │ 24 Nov 25 14:14 UTC │
	│ ssh     │ -p cert-options-097221 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-097221       │ jenkins │ v1.37.0 │ 24 Nov 25 14:14 UTC │ 24 Nov 25 14:14 UTC │
	│ delete  │ -p cert-options-097221                                                                                                                                                                                                                        │ cert-options-097221       │ jenkins │ v1.37.0 │ 24 Nov 25 14:14 UTC │ 24 Nov 25 14:14 UTC │
	│ start   │ -p old-k8s-version-706771 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-706771    │ jenkins │ v1.37.0 │ 24 Nov 25 14:14 UTC │ 24 Nov 25 14:15 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-706771 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-706771    │ jenkins │ v1.37.0 │ 24 Nov 25 14:15 UTC │                     │
	│ stop    │ -p old-k8s-version-706771 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-706771    │ jenkins │ v1.37.0 │ 24 Nov 25 14:15 UTC │ 24 Nov 25 14:15 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-706771 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-706771    │ jenkins │ v1.37.0 │ 24 Nov 25 14:15 UTC │ 24 Nov 25 14:15 UTC │
	│ start   │ -p old-k8s-version-706771 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-706771    │ jenkins │ v1.37.0 │ 24 Nov 25 14:15 UTC │ 24 Nov 25 14:16 UTC │
	│ image   │ old-k8s-version-706771 image list --format=json                                                                                                                                                                                               │ old-k8s-version-706771    │ jenkins │ v1.37.0 │ 24 Nov 25 14:16 UTC │ 24 Nov 25 14:16 UTC │
	│ pause   │ -p old-k8s-version-706771 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-706771    │ jenkins │ v1.37.0 │ 24 Nov 25 14:16 UTC │                     │
	│ start   │ -p cert-expiration-032076 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-032076    │ jenkins │ v1.37.0 │ 24 Nov 25 14:17 UTC │ 24 Nov 25 14:17 UTC │
	│ delete  │ -p old-k8s-version-706771                                                                                                                                                                                                                     │ old-k8s-version-706771    │ jenkins │ v1.37.0 │ 24 Nov 25 14:17 UTC │ 24 Nov 25 14:17 UTC │
	│ delete  │ -p old-k8s-version-706771                                                                                                                                                                                                                     │ old-k8s-version-706771    │ jenkins │ v1.37.0 │ 24 Nov 25 14:17 UTC │ 24 Nov 25 14:17 UTC │
	│ start   │ -p no-preload-444317 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-444317         │ jenkins │ v1.37.0 │ 24 Nov 25 14:17 UTC │ 24 Nov 25 14:18 UTC │
	│ delete  │ -p cert-expiration-032076                                                                                                                                                                                                                     │ cert-expiration-032076    │ jenkins │ v1.37.0 │ 24 Nov 25 14:17 UTC │ 24 Nov 25 14:17 UTC │
	│ start   │ -p embed-certs-720293 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-720293        │ jenkins │ v1.37.0 │ 24 Nov 25 14:17 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-444317 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-444317         │ jenkins │ v1.37.0 │ 24 Nov 25 14:18 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 14:17:37
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 14:17:37.825788  191849 out.go:360] Setting OutFile to fd 1 ...
	I1124 14:17:37.825994  191849 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 14:17:37.826020  191849 out.go:374] Setting ErrFile to fd 2...
	I1124 14:17:37.826040  191849 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 14:17:37.826340  191849 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-2805/.minikube/bin
	I1124 14:17:37.826793  191849 out.go:368] Setting JSON to false
	I1124 14:17:37.827783  191849 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":7209,"bootTime":1763986649,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1124 14:17:37.827886  191849 start.go:143] virtualization:  
	I1124 14:17:37.831125  191849 out.go:179] * [embed-certs-720293] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1124 14:17:37.835205  191849 out.go:179]   - MINIKUBE_LOCATION=21932
	I1124 14:17:37.835394  191849 notify.go:221] Checking for updates...
	I1124 14:17:37.841645  191849 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 14:17:37.844598  191849 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21932-2805/kubeconfig
	I1124 14:17:37.847422  191849 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21932-2805/.minikube
	I1124 14:17:37.850683  191849 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1124 14:17:37.853596  191849 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 14:17:37.856972  191849 config.go:182] Loaded profile config "no-preload-444317": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 14:17:37.857135  191849 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 14:17:37.893438  191849 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1124 14:17:37.893541  191849 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 14:17:37.978385  191849 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:45 OomKillDisable:true NGoroutines:60 SystemTime:2025-11-24 14:17:37.968086962 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 14:17:37.978500  191849 docker.go:319] overlay module found
	I1124 14:17:37.982179  191849 out.go:179] * Using the docker driver based on user configuration
	I1124 14:17:37.985291  191849 start.go:309] selected driver: docker
	I1124 14:17:37.985307  191849 start.go:927] validating driver "docker" against <nil>
	I1124 14:17:37.985319  191849 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 14:17:37.985986  191849 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 14:17:38.090142  191849 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:45 OomKillDisable:true NGoroutines:60 SystemTime:2025-11-24 14:17:38.080077219 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 14:17:38.090293  191849 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1124 14:17:38.090521  191849 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 14:17:38.093554  191849 out.go:179] * Using Docker driver with root privileges
	I1124 14:17:38.096358  191849 cni.go:84] Creating CNI manager for ""
	I1124 14:17:38.096432  191849 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 14:17:38.096454  191849 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1124 14:17:38.096537  191849 start.go:353] cluster config:
	{Name:embed-certs-720293 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-720293 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 14:17:38.100451  191849 out.go:179] * Starting "embed-certs-720293" primary control-plane node in "embed-certs-720293" cluster
	I1124 14:17:38.103567  191849 cache.go:134] Beginning downloading kic base image for docker with crio
	I1124 14:17:38.106455  191849 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1124 14:17:38.109449  191849 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 14:17:38.109490  191849 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21932-2805/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1124 14:17:38.109500  191849 cache.go:65] Caching tarball of preloaded images
	I1124 14:17:38.109514  191849 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1124 14:17:38.109586  191849 preload.go:238] Found /home/jenkins/minikube-integration/21932-2805/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1124 14:17:38.109595  191849 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1124 14:17:38.109698  191849 profile.go:143] Saving config to /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/embed-certs-720293/config.json ...
	I1124 14:17:38.109717  191849 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/embed-certs-720293/config.json: {Name:mk257a47f0076ae5767e48bdf6560ebf534214e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:17:38.134380  191849 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1124 14:17:38.134402  191849 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1124 14:17:38.134415  191849 cache.go:240] Successfully downloaded all kic artifacts
	I1124 14:17:38.134443  191849 start.go:360] acquireMachinesLock for embed-certs-720293: {Name:mk63d8a86030ce5af3799b85ca4bd5722aa0f10b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 14:17:38.134552  191849 start.go:364] duration metric: took 93.572µs to acquireMachinesLock for "embed-certs-720293"
	I1124 14:17:38.134576  191849 start.go:93] Provisioning new machine with config: &{Name:embed-certs-720293 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-720293 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 14:17:38.134653  191849 start.go:125] createHost starting for "" (driver="docker")
	I1124 14:17:37.369046  188150 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/no-preload-444317/proxy-client.crt ...
	I1124 14:17:37.369076  188150 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/no-preload-444317/proxy-client.crt: {Name:mkd0a8e474ba04e6ae25bfbea11d20d4987d1ea8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:17:37.369253  188150 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/no-preload-444317/proxy-client.key ...
	I1124 14:17:37.369262  188150 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/no-preload-444317/proxy-client.key: {Name:mke48b431fbccb0ee6696187418c194b00e8d13e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:17:37.369443  188150 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2805/.minikube/certs/4611.pem (1338 bytes)
	W1124 14:17:37.369519  188150 certs.go:480] ignoring /home/jenkins/minikube-integration/21932-2805/.minikube/certs/4611_empty.pem, impossibly tiny 0 bytes
	I1124 14:17:37.369528  188150 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca-key.pem (1679 bytes)
	I1124 14:17:37.369570  188150 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca.pem (1078 bytes)
	I1124 14:17:37.369596  188150 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2805/.minikube/certs/cert.pem (1123 bytes)
	I1124 14:17:37.369621  188150 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2805/.minikube/certs/key.pem (1675 bytes)
	I1124 14:17:37.369668  188150 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2805/.minikube/files/etc/ssl/certs/46112.pem (1708 bytes)
	I1124 14:17:37.370229  188150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 14:17:37.392104  188150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1124 14:17:37.416505  188150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 14:17:37.441607  188150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1124 14:17:37.464247  188150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/no-preload-444317/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1124 14:17:37.485749  188150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/no-preload-444317/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1124 14:17:37.506167  188150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/no-preload-444317/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 14:17:37.531945  188150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/no-preload-444317/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1124 14:17:37.555952  188150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/files/etc/ssl/certs/46112.pem --> /usr/share/ca-certificates/46112.pem (1708 bytes)
	I1124 14:17:37.577376  188150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 14:17:37.636136  188150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/certs/4611.pem --> /usr/share/ca-certificates/4611.pem (1338 bytes)
	I1124 14:17:37.655976  188150 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 14:17:37.671019  188150 ssh_runner.go:195] Run: openssl version
	I1124 14:17:37.678827  188150 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 14:17:37.687598  188150 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 14:17:37.692649  188150 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 13:14 /usr/share/ca-certificates/minikubeCA.pem
	I1124 14:17:37.692716  188150 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 14:17:37.738407  188150 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 14:17:37.746893  188150 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4611.pem && ln -fs /usr/share/ca-certificates/4611.pem /etc/ssl/certs/4611.pem"
	I1124 14:17:37.754929  188150 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4611.pem
	I1124 14:17:37.759902  188150 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 13:21 /usr/share/ca-certificates/4611.pem
	I1124 14:17:37.759964  188150 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4611.pem
	I1124 14:17:37.806668  188150 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4611.pem /etc/ssl/certs/51391683.0"
	I1124 14:17:37.815288  188150 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/46112.pem && ln -fs /usr/share/ca-certificates/46112.pem /etc/ssl/certs/46112.pem"
	I1124 14:17:37.823661  188150 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/46112.pem
	I1124 14:17:37.828158  188150 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 13:21 /usr/share/ca-certificates/46112.pem
	I1124 14:17:37.828215  188150 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/46112.pem
	I1124 14:17:37.871310  188150 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/46112.pem /etc/ssl/certs/3ec20f2e.0"
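The openssl/ln pairs above install each CA into the system trust store: OpenSSL looks certificates up by subject-name hash, so each PEM gets a <hash>.0 symlink in /etc/ssl/certs (here minikubeCA.pem hashes to b5213941, giving b5213941.0). A minimal Go sketch of the same idea, assuming openssl is on PATH; installCA is a hypothetical helper mirroring the shell commands in the log, not minikube's implementation:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCA links a PEM file into an OpenSSL-style certs directory under
// its subject-name hash, the way the `openssl x509 -hash` + `ln -fs`
// pairs above do.
func installCA(pemPath, certsDir string) error {
	// `openssl x509 -hash -noout -in <pem>` prints the subject-name hash
	// OpenSSL uses to locate CAs in a hashed certs directory.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	// OpenSSL expects a <hash>.0 symlink pointing at the PEM file.
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // emulate ln -f: replace any stale link
	return os.Symlink(pemPath, link)
}

func main() {
	// Against the log above this creates /etc/ssl/certs/b5213941.0.
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}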
	I1124 14:17:37.881017  188150 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 14:17:37.892633  188150 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1124 14:17:37.892686  188150 kubeadm.go:401] StartCluster: {Name:no-preload-444317 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-444317 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 14:17:37.892758  188150 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 14:17:37.892829  188150 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 14:17:37.939339  188150 cri.go:89] found id: ""
	I1124 14:17:37.939930  188150 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 14:17:37.951891  188150 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1124 14:17:37.969094  188150 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1124 14:17:37.969151  188150 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1124 14:17:37.985065  188150 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1124 14:17:37.985084  188150 kubeadm.go:158] found existing configuration files:
	
	I1124 14:17:37.985137  188150 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1124 14:17:38.022772  188150 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1124 14:17:38.022841  188150 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1124 14:17:38.045267  188150 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1124 14:17:38.058803  188150 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1124 14:17:38.058873  188150 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1124 14:17:38.069680  188150 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1124 14:17:38.080068  188150 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1124 14:17:38.080135  188150 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1124 14:17:38.091871  188150 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1124 14:17:38.102099  188150 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1124 14:17:38.102200  188150 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
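The grep/rm pairs above are a stale-config sweep: each kubeconfig under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443, and is otherwise deleted so kubeadm regenerates it. A compact Go sketch of that loop, hypothetical and mirroring the shell commands in the log rather than minikube's source:

package main

import (
	"bytes"
	"fmt"
	"os"
)

// sweepStaleConfigs removes any config that is missing or does not
// mention the expected control-plane endpoint, matching the
// grep-then-`rm -f` pattern in the log above.
func sweepStaleConfigs(endpoint string, files []string) {
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !bytes.Contains(data, []byte(endpoint)) {
			// Missing file or wrong endpoint: remove, ignoring errors
			// exactly as `rm -f` does.
			_ = os.Remove(f)
			fmt.Printf("removed stale config %s\n", f)
		}
	}
}

func main() {
	sweepStaleConfigs("https://control-plane.minikube.internal:8443", []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}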
	I1124 14:17:38.114113  188150 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1124 14:17:38.170964  188150 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1124 14:17:38.171298  188150 kubeadm.go:319] [preflight] Running pre-flight checks
	I1124 14:17:38.210279  188150 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1124 14:17:38.210349  188150 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1124 14:17:38.210384  188150 kubeadm.go:319] OS: Linux
	I1124 14:17:38.210430  188150 kubeadm.go:319] CGROUPS_CPU: enabled
	I1124 14:17:38.210477  188150 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1124 14:17:38.210524  188150 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1124 14:17:38.210571  188150 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1124 14:17:38.210619  188150 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1124 14:17:38.210667  188150 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1124 14:17:38.210712  188150 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1124 14:17:38.210760  188150 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1124 14:17:38.210805  188150 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1124 14:17:38.322766  188150 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1124 14:17:38.322876  188150 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1124 14:17:38.322967  188150 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1124 14:17:38.355732  188150 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1124 14:17:38.362666  188150 out.go:252]   - Generating certificates and keys ...
	I1124 14:17:38.362761  188150 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1124 14:17:38.362829  188150 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1124 14:17:38.471635  188150 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1124 14:17:38.879585  188150 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1124 14:17:39.570017  188150 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1124 14:17:39.769954  188150 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1124 14:17:40.401523  188150 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1124 14:17:40.402115  188150 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-444317] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1124 14:17:40.529975  188150 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1124 14:17:40.530562  188150 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-444317] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1124 14:17:41.858532  188150 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1124 14:17:42.227916  188150 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1124 14:17:38.141405  191849 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1124 14:17:38.141657  191849 start.go:159] libmachine.API.Create for "embed-certs-720293" (driver="docker")
	I1124 14:17:38.141692  191849 client.go:173] LocalClient.Create starting
	I1124 14:17:38.141765  191849 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca.pem
	I1124 14:17:38.141801  191849 main.go:143] libmachine: Decoding PEM data...
	I1124 14:17:38.141817  191849 main.go:143] libmachine: Parsing certificate...
	I1124 14:17:38.141866  191849 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21932-2805/.minikube/certs/cert.pem
	I1124 14:17:38.141884  191849 main.go:143] libmachine: Decoding PEM data...
	I1124 14:17:38.141899  191849 main.go:143] libmachine: Parsing certificate...
	I1124 14:17:38.142263  191849 cli_runner.go:164] Run: docker network inspect embed-certs-720293 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1124 14:17:38.158368  191849 cli_runner.go:211] docker network inspect embed-certs-720293 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1124 14:17:38.158447  191849 network_create.go:284] running [docker network inspect embed-certs-720293] to gather additional debugging logs...
	I1124 14:17:38.158463  191849 cli_runner.go:164] Run: docker network inspect embed-certs-720293
	W1124 14:17:38.183675  191849 cli_runner.go:211] docker network inspect embed-certs-720293 returned with exit code 1
	I1124 14:17:38.183702  191849 network_create.go:287] error running [docker network inspect embed-certs-720293]: docker network inspect embed-certs-720293: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-720293 not found
	I1124 14:17:38.183727  191849 network_create.go:289] output of [docker network inspect embed-certs-720293]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-720293 not found
	
	** /stderr **
	I1124 14:17:38.183819  191849 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 14:17:38.206763  191849 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-b3087ee9f269 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:1a:07:60:94:e6:54} reservation:<nil>}
	I1124 14:17:38.207083  191849 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-87dca5a19352 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:e6:6c:c1:85:45:94} reservation:<nil>}
	I1124 14:17:38.207437  191849 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-9e995bd1b79e IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:82:f1:73:f5:6f:cf} reservation:<nil>}
	I1124 14:17:38.207711  191849 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-02f1d732c24a IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:6e:72:ae:e3:8e:29} reservation:<nil>}
	I1124 14:17:38.208076  191849 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019d1ce0}
	I1124 14:17:38.208093  191849 network_create.go:124] attempt to create docker network embed-certs-720293 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1124 14:17:38.208156  191849 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-720293 embed-certs-720293
	I1124 14:17:38.292109  191849 network_create.go:108] docker network embed-certs-720293 192.168.85.0/24 created
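The subnet scan above walks candidate private /24s in steps of 9 (192.168.49.0, .58, .67, .76, ...) and takes the first one not already owned by a bridge interface, here 192.168.85.0/24. A toy Go sketch of that selection with the taken set stubbed in from the log; minikube's real logic additionally inspects host interfaces and reservations:

package main

import "fmt"

// freeSubnet returns the first candidate /24 not in the taken set.
// Candidates step by 9 starting at 192.168.49.0/24, matching the log.
func freeSubnet(taken map[string]bool) (string, bool) {
	for third := 49; third <= 255; third += 9 {
		cidr := fmt.Sprintf("192.168.%d.0/24", third)
		if !taken[cidr] {
			return cidr, true
		}
	}
	return "", false
}

func main() {
	// The four subnets the log reports as taken by existing bridges.
	taken := map[string]bool{
		"192.168.49.0/24": true, "192.168.58.0/24": true,
		"192.168.67.0/24": true, "192.168.76.0/24": true,
	}
	if cidr, ok := freeSubnet(taken); ok {
		fmt.Println("using", cidr) // prints 192.168.85.0/24, as in the log
	}
}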
	I1124 14:17:38.292190  191849 kic.go:121] calculated static IP "192.168.85.2" for the "embed-certs-720293" container
	I1124 14:17:38.292293  191849 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1124 14:17:38.308952  191849 cli_runner.go:164] Run: docker volume create embed-certs-720293 --label name.minikube.sigs.k8s.io=embed-certs-720293 --label created_by.minikube.sigs.k8s.io=true
	I1124 14:17:38.336159  191849 oci.go:103] Successfully created a docker volume embed-certs-720293
	I1124 14:17:38.336235  191849 cli_runner.go:164] Run: docker run --rm --name embed-certs-720293-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-720293 --entrypoint /usr/bin/test -v embed-certs-720293:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1124 14:17:38.996951  191849 oci.go:107] Successfully prepared a docker volume embed-certs-720293
	I1124 14:17:38.997020  191849 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 14:17:38.997031  191849 kic.go:194] Starting extracting preloaded images to volume ...
	I1124 14:17:38.997103  191849 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21932-2805/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-720293:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir
	I1124 14:17:43.553670  188150 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1124 14:17:43.553952  188150 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1124 14:17:43.644424  188150 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1124 14:17:45.705071  188150 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1124 14:17:46.096714  188150 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1124 14:17:47.145251  188150 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1124 14:17:47.344923  188150 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1124 14:17:47.346602  188150 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1124 14:17:47.354763  188150 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1124 14:17:44.653238  191849 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21932-2805/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-720293:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir: (5.656100028s)
	I1124 14:17:44.653270  191849 kic.go:203] duration metric: took 5.656235751s to extract preloaded images to volume ...
	W1124 14:17:44.653412  191849 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1124 14:17:44.653520  191849 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1124 14:17:44.766130  191849 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-720293 --name embed-certs-720293 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-720293 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-720293 --network embed-certs-720293 --ip 192.168.85.2 --volume embed-certs-720293:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1124 14:17:45.185561  191849 cli_runner.go:164] Run: docker container inspect embed-certs-720293 --format={{.State.Running}}
	I1124 14:17:45.240201  191849 cli_runner.go:164] Run: docker container inspect embed-certs-720293 --format={{.State.Status}}
	I1124 14:17:45.285444  191849 cli_runner.go:164] Run: docker exec embed-certs-720293 stat /var/lib/dpkg/alternatives/iptables
	I1124 14:17:45.365560  191849 oci.go:144] the created container "embed-certs-720293" has a running status.
	I1124 14:17:45.365718  191849 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21932-2805/.minikube/machines/embed-certs-720293/id_rsa...
	I1124 14:17:45.746113  191849 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21932-2805/.minikube/machines/embed-certs-720293/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1124 14:17:45.788903  191849 cli_runner.go:164] Run: docker container inspect embed-certs-720293 --format={{.State.Status}}
	I1124 14:17:45.825470  191849 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1124 14:17:45.825492  191849 kic_runner.go:114] Args: [docker exec --privileged embed-certs-720293 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1124 14:17:45.936446  191849 cli_runner.go:164] Run: docker container inspect embed-certs-720293 --format={{.State.Status}}
	I1124 14:17:45.997603  191849 machine.go:94] provisionDockerMachine start ...
	I1124 14:17:45.997706  191849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-720293
	I1124 14:17:46.034251  191849 main.go:143] libmachine: Using SSH client type: native
	I1124 14:17:46.034661  191849 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I1124 14:17:46.034681  191849 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 14:17:46.035494  191849 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1124 14:17:47.358042  188150 out.go:252]   - Booting up control plane ...
	I1124 14:17:47.358157  188150 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1124 14:17:47.358243  188150 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1124 14:17:47.358322  188150 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1124 14:17:47.374601  188150 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1124 14:17:47.374743  188150 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1124 14:17:47.383497  188150 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1124 14:17:47.383876  188150 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1124 14:17:47.384066  188150 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1124 14:17:47.554849  188150 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1124 14:17:47.554972  188150 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1124 14:17:48.555679  188150 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001951442s
	I1124 14:17:48.569341  188150 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1124 14:17:48.569441  188150 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1124 14:17:48.569747  188150 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1124 14:17:48.569833  188150 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1124 14:17:49.242323  191849 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-720293
	
	I1124 14:17:49.242349  191849 ubuntu.go:182] provisioning hostname "embed-certs-720293"
	I1124 14:17:49.242465  191849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-720293
	I1124 14:17:49.269002  191849 main.go:143] libmachine: Using SSH client type: native
	I1124 14:17:49.269310  191849 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I1124 14:17:49.269326  191849 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-720293 && echo "embed-certs-720293" | sudo tee /etc/hostname
	I1124 14:17:49.477089  191849 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-720293
	
	I1124 14:17:49.477206  191849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-720293
	I1124 14:17:49.500490  191849 main.go:143] libmachine: Using SSH client type: native
	I1124 14:17:49.500818  191849 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I1124 14:17:49.500834  191849 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-720293' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-720293/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-720293' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 14:17:49.683689  191849 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 14:17:49.683719  191849 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21932-2805/.minikube CaCertPath:/home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21932-2805/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21932-2805/.minikube}
	I1124 14:17:49.683769  191849 ubuntu.go:190] setting up certificates
	I1124 14:17:49.683778  191849 provision.go:84] configureAuth start
	I1124 14:17:49.683855  191849 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-720293
	I1124 14:17:49.717389  191849 provision.go:143] copyHostCerts
	I1124 14:17:49.717463  191849 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-2805/.minikube/ca.pem, removing ...
	I1124 14:17:49.717472  191849 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-2805/.minikube/ca.pem
	I1124 14:17:49.717549  191849 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21932-2805/.minikube/ca.pem (1078 bytes)
	I1124 14:17:49.717644  191849 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-2805/.minikube/cert.pem, removing ...
	I1124 14:17:49.717651  191849 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-2805/.minikube/cert.pem
	I1124 14:17:49.717678  191849 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-2805/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21932-2805/.minikube/cert.pem (1123 bytes)
	I1124 14:17:49.717732  191849 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-2805/.minikube/key.pem, removing ...
	I1124 14:17:49.717736  191849 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-2805/.minikube/key.pem
	I1124 14:17:49.717759  191849 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-2805/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21932-2805/.minikube/key.pem (1675 bytes)
	I1124 14:17:49.717836  191849 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21932-2805/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca-key.pem org=jenkins.embed-certs-720293 san=[127.0.0.1 192.168.85.2 embed-certs-720293 localhost minikube]
	I1124 14:17:50.021260  191849 provision.go:177] copyRemoteCerts
	I1124 14:17:50.021382  191849 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 14:17:50.021444  191849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-720293
	I1124 14:17:50.041463  191849 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/embed-certs-720293/id_rsa Username:docker}
	I1124 14:17:50.166156  191849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1124 14:17:50.215421  191849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1124 14:17:50.252792  191849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1124 14:17:50.295604  191849 provision.go:87] duration metric: took 611.785586ms to configureAuth
	I1124 14:17:50.295686  191849 ubuntu.go:206] setting minikube options for container-runtime
	I1124 14:17:50.295959  191849 config.go:182] Loaded profile config "embed-certs-720293": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 14:17:50.296158  191849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-720293
	I1124 14:17:50.337836  191849 main.go:143] libmachine: Using SSH client type: native
	I1124 14:17:50.338199  191849 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I1124 14:17:50.338218  191849 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1124 14:17:50.827940  191849 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1124 14:17:50.827974  191849 machine.go:97] duration metric: took 4.830350178s to provisionDockerMachine
	I1124 14:17:50.827985  191849 client.go:176] duration metric: took 12.686286409s to LocalClient.Create
	I1124 14:17:50.828001  191849 start.go:167] duration metric: took 12.686345847s to libmachine.API.Create "embed-certs-720293"
	I1124 14:17:50.828009  191849 start.go:293] postStartSetup for "embed-certs-720293" (driver="docker")
	I1124 14:17:50.828036  191849 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 14:17:50.828109  191849 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 14:17:50.828151  191849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-720293
	I1124 14:17:50.857010  191849 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/embed-certs-720293/id_rsa Username:docker}
	I1124 14:17:50.985932  191849 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 14:17:50.992017  191849 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 14:17:50.992097  191849 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 14:17:50.992124  191849 filesync.go:126] Scanning /home/jenkins/minikube-integration/21932-2805/.minikube/addons for local assets ...
	I1124 14:17:50.992219  191849 filesync.go:126] Scanning /home/jenkins/minikube-integration/21932-2805/.minikube/files for local assets ...
	I1124 14:17:50.992357  191849 filesync.go:149] local asset: /home/jenkins/minikube-integration/21932-2805/.minikube/files/etc/ssl/certs/46112.pem -> 46112.pem in /etc/ssl/certs
	I1124 14:17:50.992528  191849 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 14:17:51.010282  191849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/files/etc/ssl/certs/46112.pem --> /etc/ssl/certs/46112.pem (1708 bytes)
	I1124 14:17:51.034913  191849 start.go:296] duration metric: took 206.889244ms for postStartSetup
	I1124 14:17:51.035426  191849 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-720293
	I1124 14:17:51.078308  191849 profile.go:143] Saving config to /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/embed-certs-720293/config.json ...
	I1124 14:17:51.078590  191849 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 14:17:51.078631  191849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-720293
	I1124 14:17:51.109646  191849 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/embed-certs-720293/id_rsa Username:docker}
	I1124 14:17:51.215571  191849 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 14:17:51.225435  191849 start.go:128] duration metric: took 13.090765932s to createHost
	I1124 14:17:51.225464  191849 start.go:83] releasing machines lock for "embed-certs-720293", held for 13.09090314s
	I1124 14:17:51.225541  191849 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-720293
	I1124 14:17:51.257100  191849 ssh_runner.go:195] Run: cat /version.json
	I1124 14:17:51.257204  191849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-720293
	I1124 14:17:51.257432  191849 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 14:17:51.257486  191849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-720293
	I1124 14:17:51.291435  191849 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/embed-certs-720293/id_rsa Username:docker}
	I1124 14:17:51.291537  191849 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/embed-certs-720293/id_rsa Username:docker}
	I1124 14:17:51.415728  191849 ssh_runner.go:195] Run: systemctl --version
	I1124 14:17:51.517644  191849 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1124 14:17:51.586525  191849 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 14:17:51.594834  191849 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 14:17:51.594925  191849 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 14:17:51.644286  191849 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
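
Note the rename-don't-delete pattern above: conflicting bridge/podman CNI configs get an .mk_disabled suffix so the kindnet config can own the node, while the originals stay recoverable. Undoing it would be the reverse rename (illustrative, not part of this run):

	sudo find /etc/cni/net.d -maxdepth 1 -name '*.mk_disabled' \
	  -exec sh -c 'mv "$1" "${1%.mk_disabled}"' _ {} \;
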
	I1124 14:17:51.644327  191849 start.go:496] detecting cgroup driver to use...
	I1124 14:17:51.644358  191849 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1124 14:17:51.644422  191849 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1124 14:17:51.675130  191849 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1124 14:17:51.693348  191849 docker.go:218] disabling cri-docker service (if available) ...
	I1124 14:17:51.693423  191849 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 14:17:51.723086  191849 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 14:17:51.751931  191849 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 14:17:51.942217  191849 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 14:17:52.158895  191849 docker.go:234] disabling docker service ...
	I1124 14:17:52.158979  191849 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 14:17:52.188796  191849 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 14:17:52.204768  191849 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 14:17:52.414992  191849 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 14:17:52.605058  191849 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
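
The systemctl calls above enforce runtime exclusivity: with cri-o as the target runtime, both cri-dockerd and dockerd are stopped, disabled, and masked so socket activation cannot bring them back. Condensed into one script (same units as the log):

	for unit in cri-docker.socket cri-docker.service docker.socket docker.service; do
	  sudo systemctl stop -f "$unit" || true
	done
	sudo systemctl disable cri-docker.socket docker.socket
	sudo systemctl mask cri-docker.service docker.service
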
	I1124 14:17:52.624948  191849 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 14:17:52.648258  191849 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1124 14:17:52.648371  191849 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 14:17:52.660470  191849 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1124 14:17:52.660582  191849 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 14:17:52.669973  191849 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 14:17:52.686273  191849 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 14:17:52.694816  191849 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 14:17:52.707321  191849 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 14:17:52.723940  191849 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 14:17:52.742761  191849 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 14:17:52.754034  191849 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 14:17:52.761959  191849 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 14:17:52.770109  191849 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 14:17:52.964417  191849 ssh_runner.go:195] Run: sudo systemctl restart crio
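
Everything from 14:17:52.648 to the restart is in-place surgery on CRI-O's drop-in config. The core edits, condensed into one script (same sed expressions as the log; the default_sysctls and ip_forward steps are elided here):

	CONF=/etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' "$CONF"
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
	sudo sed -i '/conmon_cgroup = .*/d' "$CONF"
	sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
	sudo systemctl daemon-reload && sudo systemctl restart crio
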
	I1124 14:17:53.223775  191849 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1124 14:17:53.223914  191849 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1124 14:17:53.228311  191849 start.go:564] Will wait 60s for crictl version
	I1124 14:17:53.228422  191849 ssh_runner.go:195] Run: which crictl
	I1124 14:17:53.232117  191849 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 14:17:53.275669  191849 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1124 14:17:53.275827  191849 ssh_runner.go:195] Run: crio --version
	I1124 14:17:53.350984  191849 ssh_runner.go:195] Run: crio --version
	I1124 14:17:53.384630  191849 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1124 14:17:53.387655  191849 cli_runner.go:164] Run: docker network inspect embed-certs-720293 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 14:17:53.414167  191849 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1124 14:17:53.418183  191849 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
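
The bash one-liner above (and its twin for control-plane.minikube.internal at 14:17:53.759) is a small /etc/hosts upsert: drop any stale entry, append the fresh one, and sudo cp the temp file into place rather than redirecting into /etc/hosts directly. As a reusable function (hypothetical helper, same mechanics):

	upsert_host() {  # usage: upsert_host 192.168.85.1 host.minikube.internal
	  { grep -v $'\t'"$2"'$' /etc/hosts; printf '%s\t%s\n' "$1" "$2"; } > "/tmp/h.$$"
	  sudo cp "/tmp/h.$$" /etc/hosts && rm -f "/tmp/h.$$"
	}
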
	I1124 14:17:53.433944  191849 kubeadm.go:884] updating cluster {Name:embed-certs-720293 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-720293 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 14:17:53.434069  191849 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 14:17:53.434120  191849 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 14:17:53.519377  191849 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 14:17:53.519397  191849 crio.go:433] Images already preloaded, skipping extraction
	I1124 14:17:53.519452  191849 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 14:17:53.572011  191849 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 14:17:53.572070  191849 cache_images.go:86] Images are preloaded, skipping loading
	I1124 14:17:53.572093  191849 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1124 14:17:53.572215  191849 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-720293 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-720293 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1124 14:17:53.572336  191849 ssh_runner.go:195] Run: crio config
	I1124 14:17:53.699931  191849 cni.go:84] Creating CNI manager for ""
	I1124 14:17:53.699993  191849 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 14:17:53.700028  191849 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 14:17:53.700081  191849 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-720293 NodeName:embed-certs-720293 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 14:17:53.700239  191849 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-720293"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
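
The generated config above is written to /var/tmp/minikube/kubeadm.yaml a few lines below (the "scp memory" and "cp kubeadm.yaml.new" steps). A file like this can be sanity-checked offline before init (assumes kubeadm v1.26+ on PATH, where the validate subcommand exists):

	kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml
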
	
	I1124 14:17:53.700332  191849 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1124 14:17:53.708330  191849 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 14:17:53.708483  191849 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 14:17:53.716225  191849 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1124 14:17:53.730209  191849 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 14:17:53.743099  191849 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1124 14:17:53.756110  191849 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1124 14:17:53.759819  191849 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 14:17:53.768921  191849 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 14:17:53.958309  191849 ssh_runner.go:195] Run: sudo systemctl start kubelet
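
With the kubelet unit started, its liveness can be probed on the standard healthz port, the same endpoint kubeadm's kubelet-check polls at 14:17:47 above (illustrative check, not part of this run):

	curl -sf http://127.0.0.1:10248/healthz && echo kubelet healthy
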
	I1124 14:17:53.978838  191849 certs.go:69] Setting up /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/embed-certs-720293 for IP: 192.168.85.2
	I1124 14:17:53.978906  191849 certs.go:195] generating shared ca certs ...
	I1124 14:17:53.978936  191849 certs.go:227] acquiring lock for ca certs: {Name:mk5b88bcf3bee8e73291a2c9c79f99bafa2afa7b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:17:53.979135  191849 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21932-2805/.minikube/ca.key
	I1124 14:17:53.979207  191849 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21932-2805/.minikube/proxy-client-ca.key
	I1124 14:17:53.979242  191849 certs.go:257] generating profile certs ...
	I1124 14:17:53.979318  191849 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/embed-certs-720293/client.key
	I1124 14:17:53.979424  191849 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/embed-certs-720293/client.crt with IP's: []
	I1124 14:17:54.266616  191849 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/embed-certs-720293/client.crt ...
	I1124 14:17:54.266691  191849 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/embed-certs-720293/client.crt: {Name:mk4a45558cbb4684d1bf79ce2f5f144d7d5d41c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:17:54.266904  191849 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/embed-certs-720293/client.key ...
	I1124 14:17:54.266944  191849 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/embed-certs-720293/client.key: {Name:mk1936967a90ee9f844ee88033f898f5eaea50ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:17:54.267073  191849 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/embed-certs-720293/apiserver.key.8c3742eb
	I1124 14:17:54.267113  191849 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/embed-certs-720293/apiserver.crt.8c3742eb with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1124 14:17:54.409009  191849 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/embed-certs-720293/apiserver.crt.8c3742eb ...
	I1124 14:17:54.409079  191849 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/embed-certs-720293/apiserver.crt.8c3742eb: {Name:mk666cfc2f408bf3666f600a17d71488a69dc14c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:17:54.409282  191849 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/embed-certs-720293/apiserver.key.8c3742eb ...
	I1124 14:17:54.409321  191849 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/embed-certs-720293/apiserver.key.8c3742eb: {Name:mk6851a2045324b8097fac451639818f657afd39 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:17:54.409446  191849 certs.go:382] copying /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/embed-certs-720293/apiserver.crt.8c3742eb -> /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/embed-certs-720293/apiserver.crt
	I1124 14:17:54.409571  191849 certs.go:386] copying /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/embed-certs-720293/apiserver.key.8c3742eb -> /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/embed-certs-720293/apiserver.key
	I1124 14:17:54.409669  191849 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/embed-certs-720293/proxy-client.key
	I1124 14:17:54.409711  191849 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/embed-certs-720293/proxy-client.crt with IP's: []
	I1124 14:17:55.163602  191849 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/embed-certs-720293/proxy-client.crt ...
	I1124 14:17:55.163677  191849 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/embed-certs-720293/proxy-client.crt: {Name:mkd041951efcb4cbb9f1a465cbcdf49f5b5082ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:17:55.163913  191849 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/embed-certs-720293/proxy-client.key ...
	I1124 14:17:55.163950  191849 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/embed-certs-720293/proxy-client.key: {Name:mkf70bf6ecfbf93ea57647acbf521f2b3b78c97b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
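
The generate/write pairs above mint CA-signed profile certs in-process via crypto.go. An openssl equivalent for the apiserver cert, with the SANs taken from the "with IP's" line at 14:17:54 (file names and key size are illustrative):

	openssl req -new -newkey rsa:2048 -nodes -subj "/CN=minikube" \
	  -keyout apiserver.key -out apiserver.csr
	openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
	  -days 365 -out apiserver.crt \
	  -extfile <(printf 'subjectAltName=IP:10.96.0.1,IP:127.0.0.1,IP:10.0.0.1,IP:192.168.85.2')
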
	I1124 14:17:55.164211  191849 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2805/.minikube/certs/4611.pem (1338 bytes)
	W1124 14:17:55.164281  191849 certs.go:480] ignoring /home/jenkins/minikube-integration/21932-2805/.minikube/certs/4611_empty.pem, impossibly tiny 0 bytes
	I1124 14:17:55.164307  191849 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca-key.pem (1679 bytes)
	I1124 14:17:55.164366  191849 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca.pem (1078 bytes)
	I1124 14:17:55.164416  191849 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2805/.minikube/certs/cert.pem (1123 bytes)
	I1124 14:17:55.164484  191849 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2805/.minikube/certs/key.pem (1675 bytes)
	I1124 14:17:55.164571  191849 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2805/.minikube/files/etc/ssl/certs/46112.pem (1708 bytes)
	I1124 14:17:55.165294  191849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 14:17:55.196429  191849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1124 14:17:55.221408  191849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 14:17:55.244521  191849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1124 14:17:55.274125  191849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/embed-certs-720293/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1124 14:17:55.304641  191849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/embed-certs-720293/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1124 14:17:55.335642  191849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/embed-certs-720293/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 14:17:55.367248  191849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/embed-certs-720293/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1124 14:17:55.392768  191849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/files/etc/ssl/certs/46112.pem --> /usr/share/ca-certificates/46112.pem (1708 bytes)
	I1124 14:17:55.424583  191849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 14:17:55.451587  191849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/certs/4611.pem --> /usr/share/ca-certificates/4611.pem (1338 bytes)
	I1124 14:17:55.472072  191849 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 14:17:55.498276  191849 ssh_runner.go:195] Run: openssl version
	I1124 14:17:55.510095  191849 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 14:17:55.524474  191849 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 14:17:55.529498  191849 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 13:14 /usr/share/ca-certificates/minikubeCA.pem
	I1124 14:17:55.529623  191849 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 14:17:55.602003  191849 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 14:17:55.622758  191849 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4611.pem && ln -fs /usr/share/ca-certificates/4611.pem /etc/ssl/certs/4611.pem"
	I1124 14:17:55.636862  191849 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4611.pem
	I1124 14:17:55.641569  191849 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 13:21 /usr/share/ca-certificates/4611.pem
	I1124 14:17:55.641696  191849 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4611.pem
	I1124 14:17:55.689610  191849 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4611.pem /etc/ssl/certs/51391683.0"
	I1124 14:17:55.699411  191849 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/46112.pem && ln -fs /usr/share/ca-certificates/46112.pem /etc/ssl/certs/46112.pem"
	I1124 14:17:55.731771  191849 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/46112.pem
	I1124 14:17:55.735925  191849 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 13:21 /usr/share/ca-certificates/46112.pem
	I1124 14:17:55.736070  191849 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/46112.pem
	I1124 14:17:55.826347  191849 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/46112.pem /etc/ssl/certs/3ec20f2e.0"
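
The /etc/ssl/certs/<hash>.0 symlink names above (b5213941.0, 51391683.0, 3ec20f2e.0) follow OpenSSL's subject-hash lookup convention: each hash is exactly what the preceding "openssl x509 -hash -noout" call printed for the linked PEM. Reproducing one by hand:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	# prints b5213941, hence the b5213941.0 symlink
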
	I1124 14:17:55.840614  191849 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 14:17:55.844846  191849 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1124 14:17:55.844961  191849 kubeadm.go:401] StartCluster: {Name:embed-certs-720293 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-720293 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 14:17:55.845127  191849 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 14:17:55.845220  191849 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 14:17:55.901916  191849 cri.go:89] found id: ""
	I1124 14:17:55.902034  191849 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 14:17:55.914254  191849 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1124 14:17:55.925701  191849 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1124 14:17:55.925817  191849 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1124 14:17:55.940185  191849 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1124 14:17:55.940251  191849 kubeadm.go:158] found existing configuration files:
	
	I1124 14:17:55.940332  191849 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1124 14:17:55.955614  191849 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1124 14:17:55.955797  191849 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1124 14:17:55.969519  191849 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1124 14:17:55.986326  191849 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1124 14:17:55.986461  191849 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1124 14:17:56.001827  191849 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1124 14:17:56.018838  191849 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1124 14:17:56.018969  191849 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1124 14:17:56.033566  191849 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1124 14:17:56.046603  191849 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1124 14:17:56.046739  191849 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1124 14:17:56.061526  191849 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1124 14:17:56.143594  191849 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1124 14:17:56.145249  191849 kubeadm.go:319] [preflight] Running pre-flight checks
	I1124 14:17:56.185055  191849 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1124 14:17:56.185204  191849 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1124 14:17:56.185276  191849 kubeadm.go:319] OS: Linux
	I1124 14:17:56.185361  191849 kubeadm.go:319] CGROUPS_CPU: enabled
	I1124 14:17:56.185452  191849 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1124 14:17:56.185531  191849 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1124 14:17:56.185610  191849 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1124 14:17:56.185698  191849 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1124 14:17:56.185777  191849 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1124 14:17:56.185854  191849 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1124 14:17:56.185934  191849 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1124 14:17:56.186012  191849 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1124 14:17:56.308905  191849 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1124 14:17:56.309035  191849 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1124 14:17:56.309186  191849 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1124 14:17:56.323784  191849 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1124 14:17:56.199145  188150 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 7.629212296s
	I1124 14:17:56.330222  191849 out.go:252]   - Generating certificates and keys ...
	I1124 14:17:56.330389  191849 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1124 14:17:56.330491  191849 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1124 14:17:56.872046  191849 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1124 14:17:56.941182  191849 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1124 14:17:57.444295  191849 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1124 14:17:59.572176  188150 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 11.002584614s
	I1124 14:18:00.135884  188150 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 11.564937748s
	I1124 14:18:00.266390  188150 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1124 14:18:00.319246  188150 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1124 14:18:00.351127  188150 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1124 14:18:00.351337  188150 kubeadm.go:319] [mark-control-plane] Marking the node no-preload-444317 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1124 14:18:00.371795  188150 kubeadm.go:319] [bootstrap-token] Using token: ix487n.8exu41k6gp553gqa
	I1124 14:18:00.374818  188150 out.go:252]   - Configuring RBAC rules ...
	I1124 14:18:00.374937  188150 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1124 14:18:00.387348  188150 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1124 14:18:00.399712  188150 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1124 14:18:00.409007  188150 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1124 14:18:00.415509  188150 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1124 14:18:00.423189  188150 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1124 14:18:00.545631  188150 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1124 14:18:01.163772  188150 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1124 14:18:01.542588  188150 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1124 14:18:01.544296  188150 kubeadm.go:319] 
	I1124 14:18:01.544394  188150 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1124 14:18:01.544404  188150 kubeadm.go:319] 
	I1124 14:18:01.544490  188150 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1124 14:18:01.544498  188150 kubeadm.go:319] 
	I1124 14:18:01.544527  188150 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1124 14:18:01.545006  188150 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1124 14:18:01.545096  188150 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1124 14:18:01.545105  188150 kubeadm.go:319] 
	I1124 14:18:01.545174  188150 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1124 14:18:01.545182  188150 kubeadm.go:319] 
	I1124 14:18:01.545242  188150 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1124 14:18:01.545250  188150 kubeadm.go:319] 
	I1124 14:18:01.545302  188150 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1124 14:18:01.545412  188150 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1124 14:18:01.545494  188150 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1124 14:18:01.545503  188150 kubeadm.go:319] 
	I1124 14:18:01.545838  188150 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1124 14:18:01.545946  188150 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1124 14:18:01.545962  188150 kubeadm.go:319] 
	I1124 14:18:01.546326  188150 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token ix487n.8exu41k6gp553gqa \
	I1124 14:18:01.546474  188150 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:37f0f49cec723293ddb4e564b6685275917c85627d2c55051ccb0f083d16274f \
	I1124 14:18:01.546761  188150 kubeadm.go:319] 	--control-plane 
	I1124 14:18:01.546782  188150 kubeadm.go:319] 
	I1124 14:18:01.547064  188150 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1124 14:18:01.547074  188150 kubeadm.go:319] 
	I1124 14:18:01.547439  188150 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token ix487n.8exu41k6gp553gqa \
	I1124 14:18:01.547564  188150 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:37f0f49cec723293ddb4e564b6685275917c85627d2c55051ccb0f083d16274f 
	I1124 14:18:01.552541  188150 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1124 14:18:01.552764  188150 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1124 14:18:01.552880  188150 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1124 14:18:01.552892  188150 cni.go:84] Creating CNI manager for ""
	I1124 14:18:01.552900  188150 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 14:18:01.556005  188150 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1124 14:18:01.558715  188150 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1124 14:18:01.563846  188150 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1124 14:18:01.563869  188150 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1124 14:18:01.587978  188150 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
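Once the CNI manifest is applied, kindnet's rollout can be checked against the same kubeconfig. A minimal sketch, assuming minikube's usual object names (a DaemonSet called kindnet in kube-system, pods labeled app=kindnet):

    kubectl -n kube-system rollout status daemonset/kindnet --timeout=60s
    kubectl -n kube-system get pods -l app=kindnet -o wide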
	I1124 14:18:02.244654  188150 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1124 14:18:02.244787  188150 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:18:02.244849  188150 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-444317 minikube.k8s.io/updated_at=2025_11_24T14_18_02_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=b5d1c9f4e75f4e638a533695fd62619949cefcab minikube.k8s.io/name=no-preload-444317 minikube.k8s.io/primary=true
	I1124 14:17:58.131812  191849 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1124 14:17:59.060129  191849 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1124 14:17:59.060742  191849 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-720293 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1124 14:17:59.602690  191849 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1124 14:17:59.603585  191849 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-720293 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1124 14:18:02.163310  191849 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1124 14:18:02.619521  188150 ops.go:34] apiserver oom_adj: -16
	I1124 14:18:02.619634  188150 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:18:03.119820  188150 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:18:03.619764  188150 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:18:04.119752  188150 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:18:04.619713  188150 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:18:05.120567  188150 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:18:05.620060  188150 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:18:06.056563  188150 kubeadm.go:1114] duration metric: took 3.811822544s to wait for elevateKubeSystemPrivileges
	I1124 14:18:06.056599  188150 kubeadm.go:403] duration metric: took 28.163917999s to StartCluster
	I1124 14:18:06.056618  188150 settings.go:142] acquiring lock: {Name:mk89c1ba43c874315f683e1eb3a8f5ff3817a931 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:18:06.056692  188150 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21932-2805/kubeconfig
	I1124 14:18:06.057401  188150 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2805/kubeconfig: {Name:mk95d10d27091d631e85a5a3c35d5e4e38630871 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:18:06.057643  188150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1124 14:18:06.057647  188150 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 14:18:06.057908  188150 config.go:182] Loaded profile config "no-preload-444317": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 14:18:06.057954  188150 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 14:18:06.058029  188150 addons.go:70] Setting storage-provisioner=true in profile "no-preload-444317"
	I1124 14:18:06.058043  188150 addons.go:239] Setting addon storage-provisioner=true in "no-preload-444317"
	I1124 14:18:06.058064  188150 host.go:66] Checking if "no-preload-444317" exists ...
	I1124 14:18:06.058525  188150 cli_runner.go:164] Run: docker container inspect no-preload-444317 --format={{.State.Status}}
	I1124 14:18:06.059167  188150 addons.go:70] Setting default-storageclass=true in profile "no-preload-444317"
	I1124 14:18:06.059188  188150 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-444317"
	I1124 14:18:06.059521  188150 cli_runner.go:164] Run: docker container inspect no-preload-444317 --format={{.State.Status}}
	I1124 14:18:06.061455  188150 out.go:179] * Verifying Kubernetes components...
	I1124 14:18:06.065576  188150 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 14:18:06.099630  188150 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 14:18:03.243777  191849 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1124 14:18:03.930066  191849 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1124 14:18:03.930386  191849 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1124 14:18:04.436213  191849 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1124 14:18:04.981001  191849 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1124 14:18:05.257871  191849 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1124 14:18:06.046374  191849 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1124 14:18:06.433963  191849 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1124 14:18:06.434900  191849 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1124 14:18:06.438014  191849 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1124 14:18:06.103133  188150 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 14:18:06.103157  188150 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 14:18:06.103214  188150 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-444317
	I1124 14:18:06.104025  188150 addons.go:239] Setting addon default-storageclass=true in "no-preload-444317"
	I1124 14:18:06.104076  188150 host.go:66] Checking if "no-preload-444317" exists ...
	I1124 14:18:06.104514  188150 cli_runner.go:164] Run: docker container inspect no-preload-444317 --format={{.State.Status}}
	I1124 14:18:06.141098  188150 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 14:18:06.141120  188150 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 14:18:06.141185  188150 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-444317
	I1124 14:18:06.148618  188150 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/no-preload-444317/id_rsa Username:docker}
	I1124 14:18:06.175328  188150 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/no-preload-444317/id_rsa Username:docker}
	I1124 14:18:06.530491  188150 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 14:18:06.694909  188150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1124 14:18:06.695035  188150 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 14:18:06.745518  188150 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
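The long sed pipeline above patches the coredns ConfigMap in place so that host.minikube.internal resolves to the host gateway. Reconstructed from the sed expressions in the command, the injected Corefile fragment is:

    hosts {
       192.168.76.1 host.minikube.internal
       fallthrough
    }

(a log directive is also inserted ahead of the errors plugin).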
	I1124 14:18:06.441960  191849 out.go:252]   - Booting up control plane ...
	I1124 14:18:06.442159  191849 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1124 14:18:06.442246  191849 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1124 14:18:06.444092  191849 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1124 14:18:06.483227  191849 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1124 14:18:06.483335  191849 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1124 14:18:06.493188  191849 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1124 14:18:06.493291  191849 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1124 14:18:06.493331  191849 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1124 14:18:06.762893  191849 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1124 14:18:06.763012  191849 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1124 14:18:08.353455  188150 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.822919819s)
	I1124 14:18:08.353548  188150 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.658487149s)
	I1124 14:18:08.353562  188150 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.658627294s)
	I1124 14:18:08.353757  188150 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1124 14:18:08.353600  188150 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.608035027s)
	I1124 14:18:08.356625  188150 node_ready.go:35] waiting up to 6m0s for node "no-preload-444317" to be "Ready" ...
	I1124 14:18:08.402528  188150 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1124 14:18:08.406738  188150 addons.go:530] duration metric: took 2.348775037s for enable addons: enabled=[storage-provisioner default-storageclass]
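The resulting addon set can be confirmed from the host with minikube itself, using the profile name from this run:

    minikube -p no-preload-444317 addons list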
	I1124 14:18:08.860358  188150 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-444317" context rescaled to 1 replicas
	W1124 14:18:10.359800  188150 node_ready.go:57] node "no-preload-444317" has "Ready":"False" status (will retry)
	I1124 14:18:08.259723  191849 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.500976204s
	I1124 14:18:08.261030  191849 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1124 14:18:08.261379  191849 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1124 14:18:08.261710  191849 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1124 14:18:08.262002  191849 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1124 14:18:12.307309  191849 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 4.045040812s
	I1124 14:18:14.227930  191849 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 5.965307639s
	I1124 14:18:16.264264  191849 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 8.002429206s
	I1124 14:18:16.285532  191849 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1124 14:18:16.301993  191849 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1124 14:18:16.313975  191849 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1124 14:18:16.314183  191849 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-720293 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1124 14:18:16.325644  191849 kubeadm.go:319] [bootstrap-token] Using token: 6trk9u.fkfz6d6kbgwfumdl
	W1124 14:18:12.360120  188150 node_ready.go:57] node "no-preload-444317" has "Ready":"False" status (will retry)
	W1124 14:18:14.360468  188150 node_ready.go:57] node "no-preload-444317" has "Ready":"False" status (will retry)
	W1124 14:18:16.860232  188150 node_ready.go:57] node "no-preload-444317" has "Ready":"False" status (will retry)
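These node_ready retries poll the node's Ready condition until it flips to True. The same check can be reproduced by hand with kubectl's JSONPath output:

    kubectl get node no-preload-444317 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'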
	I1124 14:18:16.328625  191849 out.go:252]   - Configuring RBAC rules ...
	I1124 14:18:16.328751  191849 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1124 14:18:16.332456  191849 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1124 14:18:16.342427  191849 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1124 14:18:16.347457  191849 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1124 14:18:16.354342  191849 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1124 14:18:16.361307  191849 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1124 14:18:16.673532  191849 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1124 14:18:17.117908  191849 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1124 14:18:17.673457  191849 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1124 14:18:17.674470  191849 kubeadm.go:319] 
	I1124 14:18:17.674539  191849 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1124 14:18:17.674544  191849 kubeadm.go:319] 
	I1124 14:18:17.674621  191849 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1124 14:18:17.674625  191849 kubeadm.go:319] 
	I1124 14:18:17.674650  191849 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1124 14:18:17.674709  191849 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1124 14:18:17.674759  191849 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1124 14:18:17.674764  191849 kubeadm.go:319] 
	I1124 14:18:17.674818  191849 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1124 14:18:17.674822  191849 kubeadm.go:319] 
	I1124 14:18:17.674869  191849 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1124 14:18:17.674873  191849 kubeadm.go:319] 
	I1124 14:18:17.674925  191849 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1124 14:18:17.675000  191849 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1124 14:18:17.675068  191849 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1124 14:18:17.675071  191849 kubeadm.go:319] 
	I1124 14:18:17.675167  191849 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1124 14:18:17.675245  191849 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1124 14:18:17.675249  191849 kubeadm.go:319] 
	I1124 14:18:17.675333  191849 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 6trk9u.fkfz6d6kbgwfumdl \
	I1124 14:18:17.675468  191849 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:37f0f49cec723293ddb4e564b6685275917c85627d2c55051ccb0f083d16274f \
	I1124 14:18:17.675490  191849 kubeadm.go:319] 	--control-plane 
	I1124 14:18:17.675494  191849 kubeadm.go:319] 
	I1124 14:18:17.675578  191849 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1124 14:18:17.675582  191849 kubeadm.go:319] 
	I1124 14:18:17.675664  191849 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 6trk9u.fkfz6d6kbgwfumdl \
	I1124 14:18:17.675766  191849 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:37f0f49cec723293ddb4e564b6685275917c85627d2c55051ccb0f083d16274f 
	I1124 14:18:17.680619  191849 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1124 14:18:17.680847  191849 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1124 14:18:17.680957  191849 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1124 14:18:17.680977  191849 cni.go:84] Creating CNI manager for ""
	I1124 14:18:17.680985  191849 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 14:18:17.684180  191849 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1124 14:18:17.687070  191849 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1124 14:18:17.691468  191849 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1124 14:18:17.691485  191849 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1124 14:18:17.706301  191849 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1124 14:18:18.336675  191849 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1124 14:18:18.336798  191849 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:18:18.336866  191849 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-720293 minikube.k8s.io/updated_at=2025_11_24T14_18_18_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=b5d1c9f4e75f4e638a533695fd62619949cefcab minikube.k8s.io/name=embed-certs-720293 minikube.k8s.io/primary=true
	I1124 14:18:18.371869  191849 ops.go:34] apiserver oom_adj: -16
	I1124 14:18:18.527851  191849 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:18:19.028577  191849 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:18:19.528398  191849 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:18:20.028602  191849 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:18:20.527958  191849 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:18:21.027999  191849 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:18:21.528548  191849 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:18:22.028762  191849 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:18:22.227228  191849 kubeadm.go:1114] duration metric: took 3.890473898s to wait for elevateKubeSystemPrivileges
	I1124 14:18:22.227260  191849 kubeadm.go:403] duration metric: took 26.382303242s to StartCluster
	I1124 14:18:22.227277  191849 settings.go:142] acquiring lock: {Name:mk89c1ba43c874315f683e1eb3a8f5ff3817a931 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:18:22.227339  191849 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21932-2805/kubeconfig
	I1124 14:18:22.228760  191849 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2805/kubeconfig: {Name:mk95d10d27091d631e85a5a3c35d5e4e38630871 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:18:22.228993  191849 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 14:18:22.229108  191849 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1124 14:18:22.229351  191849 config.go:182] Loaded profile config "embed-certs-720293": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 14:18:22.229386  191849 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 14:18:22.229449  191849 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-720293"
	I1124 14:18:22.229463  191849 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-720293"
	I1124 14:18:22.229487  191849 host.go:66] Checking if "embed-certs-720293" exists ...
	I1124 14:18:22.230054  191849 addons.go:70] Setting default-storageclass=true in profile "embed-certs-720293"
	I1124 14:18:22.230081  191849 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-720293"
	I1124 14:18:22.230370  191849 cli_runner.go:164] Run: docker container inspect embed-certs-720293 --format={{.State.Status}}
	I1124 14:18:22.230585  191849 cli_runner.go:164] Run: docker container inspect embed-certs-720293 --format={{.State.Status}}
	I1124 14:18:22.233712  191849 out.go:179] * Verifying Kubernetes components...
	I1124 14:18:22.239563  191849 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 14:18:22.272043  191849 addons.go:239] Setting addon default-storageclass=true in "embed-certs-720293"
	I1124 14:18:22.272084  191849 host.go:66] Checking if "embed-certs-720293" exists ...
	I1124 14:18:22.274288  191849 cli_runner.go:164] Run: docker container inspect embed-certs-720293 --format={{.State.Status}}
	I1124 14:18:22.278747  191849 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W1124 14:18:18.860310  188150 node_ready.go:57] node "no-preload-444317" has "Ready":"False" status (will retry)
	W1124 14:18:21.359782  188150 node_ready.go:57] node "no-preload-444317" has "Ready":"False" status (will retry)
	I1124 14:18:22.281861  191849 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 14:18:22.281887  191849 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 14:18:22.281959  191849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-720293
	I1124 14:18:22.318311  191849 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/embed-certs-720293/id_rsa Username:docker}
	I1124 14:18:22.322711  191849 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 14:18:22.322734  191849 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 14:18:22.322805  191849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-720293
	I1124 14:18:22.370409  191849 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/embed-certs-720293/id_rsa Username:docker}
	I1124 14:18:22.640881  191849 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1124 14:18:22.668871  191849 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 14:18:22.710738  191849 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 14:18:22.787638  191849 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 14:18:23.619065  191849 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1124 14:18:23.621474  191849 node_ready.go:35] waiting up to 6m0s for node "embed-certs-720293" to be "Ready" ...
	I1124 14:18:23.948366  191849 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.160673632s)
	I1124 14:18:23.951439  191849 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1124 14:18:22.366476  188150 node_ready.go:49] node "no-preload-444317" is "Ready"
	I1124 14:18:22.366504  188150 node_ready.go:38] duration metric: took 14.00981622s for node "no-preload-444317" to be "Ready" ...
	I1124 14:18:22.366518  188150 api_server.go:52] waiting for apiserver process to appear ...
	I1124 14:18:22.366572  188150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 14:18:22.432714  188150 api_server.go:72] duration metric: took 16.375037193s to wait for apiserver process to appear ...
	I1124 14:18:22.432740  188150 api_server.go:88] waiting for apiserver healthz status ...
	I1124 14:18:22.432760  188150 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 14:18:22.443688  188150 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1124 14:18:22.444941  188150 api_server.go:141] control plane version: v1.34.1
	I1124 14:18:22.444972  188150 api_server.go:131] duration metric: took 12.224501ms to wait for apiserver health ...
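The healthz probe above talks to the apiserver directly. On a default cluster the /healthz, /livez and /readyz endpoints are readable without credentials (via the system:public-info-viewer binding), so the same 200/ok response can usually be reproduced with curl; -k skips verification of the cluster's self-signed serving certificate:

    curl -k https://192.168.76.2:8443/healthz
    # ok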
	I1124 14:18:22.444981  188150 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 14:18:22.449513  188150 system_pods.go:59] 8 kube-system pods found
	I1124 14:18:22.449561  188150 system_pods.go:61] "coredns-66bc5c9577-lrh58" [feb6ac32-bf93-4488-9574-cdc018d6c759] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 14:18:22.449567  188150 system_pods.go:61] "etcd-no-preload-444317" [53c3a0fc-8ca0-4f78-b868-7164585a0b4b] Running
	I1124 14:18:22.449572  188150 system_pods.go:61] "kindnet-zwxh6" [5314d14c-5c74-4df6-a25c-349e3ce92848] Running
	I1124 14:18:22.449576  188150 system_pods.go:61] "kube-apiserver-no-preload-444317" [46a06d31-31d6-4cb1-907b-1b57afa9d38d] Running
	I1124 14:18:22.449580  188150 system_pods.go:61] "kube-controller-manager-no-preload-444317" [23c76436-e21f-41f8-9e26-c1028d80c3fc] Running
	I1124 14:18:22.449584  188150 system_pods.go:61] "kube-proxy-m4fb4" [51e5fbb8-6216-4c92-a92e-a618ffdb2cf5] Running
	I1124 14:18:22.449587  188150 system_pods.go:61] "kube-scheduler-no-preload-444317" [283cab90-1b87-4ce1-8ea7-c7a0c42a13f6] Running
	I1124 14:18:22.449593  188150 system_pods.go:61] "storage-provisioner" [abff3443-a7cc-445c-94d1-a6e96ed61024] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 14:18:22.449601  188150 system_pods.go:74] duration metric: took 4.61513ms to wait for pod list to return data ...
	I1124 14:18:22.449610  188150 default_sa.go:34] waiting for default service account to be created ...
	I1124 14:18:22.454163  188150 default_sa.go:45] found service account: "default"
	I1124 14:18:22.454206  188150 default_sa.go:55] duration metric: took 4.58953ms for default service account to be created ...
	I1124 14:18:22.454216  188150 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 14:18:22.457361  188150 system_pods.go:86] 8 kube-system pods found
	I1124 14:18:22.457402  188150 system_pods.go:89] "coredns-66bc5c9577-lrh58" [feb6ac32-bf93-4488-9574-cdc018d6c759] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 14:18:22.457409  188150 system_pods.go:89] "etcd-no-preload-444317" [53c3a0fc-8ca0-4f78-b868-7164585a0b4b] Running
	I1124 14:18:22.457415  188150 system_pods.go:89] "kindnet-zwxh6" [5314d14c-5c74-4df6-a25c-349e3ce92848] Running
	I1124 14:18:22.457421  188150 system_pods.go:89] "kube-apiserver-no-preload-444317" [46a06d31-31d6-4cb1-907b-1b57afa9d38d] Running
	I1124 14:18:22.457427  188150 system_pods.go:89] "kube-controller-manager-no-preload-444317" [23c76436-e21f-41f8-9e26-c1028d80c3fc] Running
	I1124 14:18:22.457431  188150 system_pods.go:89] "kube-proxy-m4fb4" [51e5fbb8-6216-4c92-a92e-a618ffdb2cf5] Running
	I1124 14:18:22.457435  188150 system_pods.go:89] "kube-scheduler-no-preload-444317" [283cab90-1b87-4ce1-8ea7-c7a0c42a13f6] Running
	I1124 14:18:22.457442  188150 system_pods.go:89] "storage-provisioner" [abff3443-a7cc-445c-94d1-a6e96ed61024] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 14:18:22.457478  188150 retry.go:31] will retry after 247.043667ms: missing components: kube-dns
	I1124 14:18:22.720922  188150 system_pods.go:86] 8 kube-system pods found
	I1124 14:18:22.721011  188150 system_pods.go:89] "coredns-66bc5c9577-lrh58" [feb6ac32-bf93-4488-9574-cdc018d6c759] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 14:18:22.721035  188150 system_pods.go:89] "etcd-no-preload-444317" [53c3a0fc-8ca0-4f78-b868-7164585a0b4b] Running
	I1124 14:18:22.721058  188150 system_pods.go:89] "kindnet-zwxh6" [5314d14c-5c74-4df6-a25c-349e3ce92848] Running
	I1124 14:18:22.721101  188150 system_pods.go:89] "kube-apiserver-no-preload-444317" [46a06d31-31d6-4cb1-907b-1b57afa9d38d] Running
	I1124 14:18:22.721124  188150 system_pods.go:89] "kube-controller-manager-no-preload-444317" [23c76436-e21f-41f8-9e26-c1028d80c3fc] Running
	I1124 14:18:22.721157  188150 system_pods.go:89] "kube-proxy-m4fb4" [51e5fbb8-6216-4c92-a92e-a618ffdb2cf5] Running
	I1124 14:18:22.721178  188150 system_pods.go:89] "kube-scheduler-no-preload-444317" [283cab90-1b87-4ce1-8ea7-c7a0c42a13f6] Running
	I1124 14:18:22.721216  188150 system_pods.go:89] "storage-provisioner" [abff3443-a7cc-445c-94d1-a6e96ed61024] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 14:18:22.721256  188150 retry.go:31] will retry after 264.666875ms: missing components: kube-dns
	I1124 14:18:23.018978  188150 system_pods.go:86] 8 kube-system pods found
	I1124 14:18:23.019083  188150 system_pods.go:89] "coredns-66bc5c9577-lrh58" [feb6ac32-bf93-4488-9574-cdc018d6c759] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 14:18:23.019144  188150 system_pods.go:89] "etcd-no-preload-444317" [53c3a0fc-8ca0-4f78-b868-7164585a0b4b] Running
	I1124 14:18:23.019185  188150 system_pods.go:89] "kindnet-zwxh6" [5314d14c-5c74-4df6-a25c-349e3ce92848] Running
	I1124 14:18:23.019207  188150 system_pods.go:89] "kube-apiserver-no-preload-444317" [46a06d31-31d6-4cb1-907b-1b57afa9d38d] Running
	I1124 14:18:23.019244  188150 system_pods.go:89] "kube-controller-manager-no-preload-444317" [23c76436-e21f-41f8-9e26-c1028d80c3fc] Running
	I1124 14:18:23.019276  188150 system_pods.go:89] "kube-proxy-m4fb4" [51e5fbb8-6216-4c92-a92e-a618ffdb2cf5] Running
	I1124 14:18:23.019316  188150 system_pods.go:89] "kube-scheduler-no-preload-444317" [283cab90-1b87-4ce1-8ea7-c7a0c42a13f6] Running
	I1124 14:18:23.019342  188150 system_pods.go:89] "storage-provisioner" [abff3443-a7cc-445c-94d1-a6e96ed61024] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 14:18:23.019398  188150 retry.go:31] will retry after 302.062771ms: missing components: kube-dns
	I1124 14:18:23.325770  188150 system_pods.go:86] 8 kube-system pods found
	I1124 14:18:23.325861  188150 system_pods.go:89] "coredns-66bc5c9577-lrh58" [feb6ac32-bf93-4488-9574-cdc018d6c759] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 14:18:23.325884  188150 system_pods.go:89] "etcd-no-preload-444317" [53c3a0fc-8ca0-4f78-b868-7164585a0b4b] Running
	I1124 14:18:23.325923  188150 system_pods.go:89] "kindnet-zwxh6" [5314d14c-5c74-4df6-a25c-349e3ce92848] Running
	I1124 14:18:23.325950  188150 system_pods.go:89] "kube-apiserver-no-preload-444317" [46a06d31-31d6-4cb1-907b-1b57afa9d38d] Running
	I1124 14:18:23.325977  188150 system_pods.go:89] "kube-controller-manager-no-preload-444317" [23c76436-e21f-41f8-9e26-c1028d80c3fc] Running
	I1124 14:18:23.326013  188150 system_pods.go:89] "kube-proxy-m4fb4" [51e5fbb8-6216-4c92-a92e-a618ffdb2cf5] Running
	I1124 14:18:23.326037  188150 system_pods.go:89] "kube-scheduler-no-preload-444317" [283cab90-1b87-4ce1-8ea7-c7a0c42a13f6] Running
	I1124 14:18:23.326061  188150 system_pods.go:89] "storage-provisioner" [abff3443-a7cc-445c-94d1-a6e96ed61024] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 14:18:23.326109  188150 retry.go:31] will retry after 518.353863ms: missing components: kube-dns
	I1124 14:18:23.849065  188150 system_pods.go:86] 8 kube-system pods found
	I1124 14:18:23.849146  188150 system_pods.go:89] "coredns-66bc5c9577-lrh58" [feb6ac32-bf93-4488-9574-cdc018d6c759] Running
	I1124 14:18:23.849168  188150 system_pods.go:89] "etcd-no-preload-444317" [53c3a0fc-8ca0-4f78-b868-7164585a0b4b] Running
	I1124 14:18:23.849189  188150 system_pods.go:89] "kindnet-zwxh6" [5314d14c-5c74-4df6-a25c-349e3ce92848] Running
	I1124 14:18:23.849226  188150 system_pods.go:89] "kube-apiserver-no-preload-444317" [46a06d31-31d6-4cb1-907b-1b57afa9d38d] Running
	I1124 14:18:23.849247  188150 system_pods.go:89] "kube-controller-manager-no-preload-444317" [23c76436-e21f-41f8-9e26-c1028d80c3fc] Running
	I1124 14:18:23.849271  188150 system_pods.go:89] "kube-proxy-m4fb4" [51e5fbb8-6216-4c92-a92e-a618ffdb2cf5] Running
	I1124 14:18:23.849303  188150 system_pods.go:89] "kube-scheduler-no-preload-444317" [283cab90-1b87-4ce1-8ea7-c7a0c42a13f6] Running
	I1124 14:18:23.849326  188150 system_pods.go:89] "storage-provisioner" [abff3443-a7cc-445c-94d1-a6e96ed61024] Running
	I1124 14:18:23.849348  188150 system_pods.go:126] duration metric: took 1.395125092s to wait for k8s-apps to be running ...
	I1124 14:18:23.849382  188150 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 14:18:23.849476  188150 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 14:18:23.880572  188150 system_svc.go:56] duration metric: took 31.182186ms WaitForService to wait for kubelet
	I1124 14:18:23.880654  188150 kubeadm.go:587] duration metric: took 17.822981414s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 14:18:23.880687  188150 node_conditions.go:102] verifying NodePressure condition ...
	I1124 14:18:23.889344  188150 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1124 14:18:23.889422  188150 node_conditions.go:123] node cpu capacity is 2
	I1124 14:18:23.889451  188150 node_conditions.go:105] duration metric: took 8.722252ms to run NodePressure ...
	I1124 14:18:23.889497  188150 start.go:242] waiting for startup goroutines ...
	I1124 14:18:23.889522  188150 start.go:247] waiting for cluster config update ...
	I1124 14:18:23.889551  188150 start.go:256] writing updated cluster config ...
	I1124 14:18:23.889903  188150 ssh_runner.go:195] Run: rm -f paused
	I1124 14:18:23.895837  188150 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 14:18:23.900723  188150 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-lrh58" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:18:23.910500  188150 pod_ready.go:94] pod "coredns-66bc5c9577-lrh58" is "Ready"
	I1124 14:18:23.910575  188150 pod_ready.go:86] duration metric: took 9.780692ms for pod "coredns-66bc5c9577-lrh58" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:18:23.913832  188150 pod_ready.go:83] waiting for pod "etcd-no-preload-444317" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:18:23.923128  188150 pod_ready.go:94] pod "etcd-no-preload-444317" is "Ready"
	I1124 14:18:23.923206  188150 pod_ready.go:86] duration metric: took 9.29066ms for pod "etcd-no-preload-444317" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:18:23.927554  188150 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-444317" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:18:23.936947  188150 pod_ready.go:94] pod "kube-apiserver-no-preload-444317" is "Ready"
	I1124 14:18:23.937021  188150 pod_ready.go:86] duration metric: took 9.396303ms for pod "kube-apiserver-no-preload-444317" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:18:23.946280  188150 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-444317" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:18:24.300570  188150 pod_ready.go:94] pod "kube-controller-manager-no-preload-444317" is "Ready"
	I1124 14:18:24.300596  188150 pod_ready.go:86] duration metric: took 354.243295ms for pod "kube-controller-manager-no-preload-444317" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:18:24.500718  188150 pod_ready.go:83] waiting for pod "kube-proxy-m4fb4" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:18:24.900050  188150 pod_ready.go:94] pod "kube-proxy-m4fb4" is "Ready"
	I1124 14:18:24.900073  188150 pod_ready.go:86] duration metric: took 399.327982ms for pod "kube-proxy-m4fb4" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:18:25.101066  188150 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-444317" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:18:25.499989  188150 pod_ready.go:94] pod "kube-scheduler-no-preload-444317" is "Ready"
	I1124 14:18:25.500018  188150 pod_ready.go:86] duration metric: took 398.92604ms for pod "kube-scheduler-no-preload-444317" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:18:25.500032  188150 pod_ready.go:40] duration metric: took 1.604112911s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
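A rough manual equivalent of these pod_ready waits is kubectl wait with the same label selectors, for example for the kube-dns case:

    kubectl -n kube-system wait pod -l k8s-app=kube-dns \
      --for=condition=Ready --timeout=4m0s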
	I1124 14:18:25.556118  188150 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1124 14:18:25.561394  188150 out.go:179] * Done! kubectl is now configured to use "no-preload-444317" cluster and "default" namespace by default
	I1124 14:18:23.954313  191849 addons.go:530] duration metric: took 1.724915687s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1124 14:18:24.124691  191849 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-720293" context rescaled to 1 replicas
	W1124 14:18:25.624909  191849 node_ready.go:57] node "embed-certs-720293" has "Ready":"False" status (will retry)
	W1124 14:18:28.124803  191849 node_ready.go:57] node "embed-certs-720293" has "Ready":"False" status (will retry)
	W1124 14:18:30.624824  191849 node_ready.go:57] node "embed-certs-720293" has "Ready":"False" status (will retry)
	
	
	==> CRI-O <==
	Nov 24 14:18:22 no-preload-444317 crio[838]: time="2025-11-24T14:18:22.874964022Z" level=info msg="Created container c8ece08ff518b5f39e83efdf2ebb53ccd31cd8f3b41f55a376ab1a8b397a48cf: kube-system/coredns-66bc5c9577-lrh58/coredns" id=498bf761-f70c-412e-b92d-d935231cc0db name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 14:18:22 no-preload-444317 crio[838]: time="2025-11-24T14:18:22.880891536Z" level=info msg="Starting container: c8ece08ff518b5f39e83efdf2ebb53ccd31cd8f3b41f55a376ab1a8b397a48cf" id=e5027077-4ef3-41df-ab1b-fdd47430f944 name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 14:18:22 no-preload-444317 crio[838]: time="2025-11-24T14:18:22.88574182Z" level=info msg="Started container" PID=2485 containerID=c8ece08ff518b5f39e83efdf2ebb53ccd31cd8f3b41f55a376ab1a8b397a48cf description=kube-system/coredns-66bc5c9577-lrh58/coredns id=e5027077-4ef3-41df-ab1b-fdd47430f944 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f0067d24d056fcb334041ce4330b855209cfc2c9ac07f7fe22a5bce3d85f63cb
	Nov 24 14:18:26 no-preload-444317 crio[838]: time="2025-11-24T14:18:26.378274251Z" level=info msg="Running pod sandbox: default/busybox/POD" id=39191a60-8a7a-432d-9fd1-ffea11f67f4d name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 24 14:18:26 no-preload-444317 crio[838]: time="2025-11-24T14:18:26.378347072Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 14:18:26 no-preload-444317 crio[838]: time="2025-11-24T14:18:26.387443679Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:1b92ba12628578976af15a41c355225cc16bd2a63c9815039b6f236aaf3ad251 UID:425d89e6-e7dd-4305-a272-badc5ebf1597 NetNS:/var/run/netns/6b06db78-fe7a-4300-9cd0-b5254ecd7549 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012d9e8}] Aliases:map[]}"
	Nov 24 14:18:26 no-preload-444317 crio[838]: time="2025-11-24T14:18:26.38748267Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 24 14:18:26 no-preload-444317 crio[838]: time="2025-11-24T14:18:26.399319294Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:1b92ba12628578976af15a41c355225cc16bd2a63c9815039b6f236aaf3ad251 UID:425d89e6-e7dd-4305-a272-badc5ebf1597 NetNS:/var/run/netns/6b06db78-fe7a-4300-9cd0-b5254ecd7549 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012d9e8}] Aliases:map[]}"
	Nov 24 14:18:26 no-preload-444317 crio[838]: time="2025-11-24T14:18:26.399527378Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 24 14:18:26 no-preload-444317 crio[838]: time="2025-11-24T14:18:26.402858114Z" level=info msg="Ran pod sandbox 1b92ba12628578976af15a41c355225cc16bd2a63c9815039b6f236aaf3ad251 with infra container: default/busybox/POD" id=39191a60-8a7a-432d-9fd1-ffea11f67f4d name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 24 14:18:26 no-preload-444317 crio[838]: time="2025-11-24T14:18:26.414659832Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=db836c3f-9b16-43a6-9286-eb227b4dadec name=/runtime.v1.ImageService/ImageStatus
	Nov 24 14:18:26 no-preload-444317 crio[838]: time="2025-11-24T14:18:26.414822205Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=db836c3f-9b16-43a6-9286-eb227b4dadec name=/runtime.v1.ImageService/ImageStatus
	Nov 24 14:18:26 no-preload-444317 crio[838]: time="2025-11-24T14:18:26.41486767Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=db836c3f-9b16-43a6-9286-eb227b4dadec name=/runtime.v1.ImageService/ImageStatus
	Nov 24 14:18:26 no-preload-444317 crio[838]: time="2025-11-24T14:18:26.416263804Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=296565e4-c92d-4ca0-931f-59ca4610088f name=/runtime.v1.ImageService/PullImage
	Nov 24 14:18:26 no-preload-444317 crio[838]: time="2025-11-24T14:18:26.41852764Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 24 14:18:28 no-preload-444317 crio[838]: time="2025-11-24T14:18:28.463059931Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=296565e4-c92d-4ca0-931f-59ca4610088f name=/runtime.v1.ImageService/PullImage
	Nov 24 14:18:28 no-preload-444317 crio[838]: time="2025-11-24T14:18:28.46370633Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=bfa00206-dfe9-465e-84f5-02a32204e1f9 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 14:18:28 no-preload-444317 crio[838]: time="2025-11-24T14:18:28.465156479Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=19092889-70dc-477d-a1a3-63a5ffec2517 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 14:18:28 no-preload-444317 crio[838]: time="2025-11-24T14:18:28.471509146Z" level=info msg="Creating container: default/busybox/busybox" id=51b25a73-ff02-4969-a5d8-3590f7e12401 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 14:18:28 no-preload-444317 crio[838]: time="2025-11-24T14:18:28.471640561Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 14:18:28 no-preload-444317 crio[838]: time="2025-11-24T14:18:28.477036254Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 14:18:28 no-preload-444317 crio[838]: time="2025-11-24T14:18:28.477527607Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 14:18:28 no-preload-444317 crio[838]: time="2025-11-24T14:18:28.493244428Z" level=info msg="Created container 4bc8a6f9d2b6a2ac9ec9d42a903f311edfeb3636c610d2ab1d03e14e8f16da06: default/busybox/busybox" id=51b25a73-ff02-4969-a5d8-3590f7e12401 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 14:18:28 no-preload-444317 crio[838]: time="2025-11-24T14:18:28.494199376Z" level=info msg="Starting container: 4bc8a6f9d2b6a2ac9ec9d42a903f311edfeb3636c610d2ab1d03e14e8f16da06" id=59cfd8ba-bdba-4bb2-a554-ed0f3378f9bb name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 14:18:28 no-preload-444317 crio[838]: time="2025-11-24T14:18:28.49590771Z" level=info msg="Started container" PID=2536 containerID=4bc8a6f9d2b6a2ac9ec9d42a903f311edfeb3636c610d2ab1d03e14e8f16da06 description=default/busybox/busybox id=59cfd8ba-bdba-4bb2-a554-ed0f3378f9bb name=/runtime.v1.RuntimeService/StartContainer sandboxID=1b92ba12628578976af15a41c355225cc16bd2a63c9815039b6f236aaf3ad251
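The container status table that follows is CRI output; an equivalent listing can be pulled on the node with crictl, a sketch assuming the default CRI-O socket path:

    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a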
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	4bc8a6f9d2b6a       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   8 seconds ago       Running             busybox                   0                   1b92ba1262857       busybox                                     default
	c8ece08ff518b       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      14 seconds ago      Running             coredns                   0                   f0067d24d056f       coredns-66bc5c9577-lrh58                    kube-system
	6033453973ec2       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                      14 seconds ago      Running             storage-provisioner       0                   cf9c2cecbc0ee       storage-provisioner                         kube-system
	af09f2cdfeb6a       docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1    26 seconds ago      Running             kindnet-cni               0                   04692b32b0bb5       kindnet-zwxh6                               kube-system
	27d19fd820af8       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      31 seconds ago      Running             kube-proxy                0                   ae1a58f8936b5       kube-proxy-m4fb4                            kube-system
	b86ec04c3d2f1       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      48 seconds ago      Running             etcd                      0                   b02eaf4d10ab6       etcd-no-preload-444317                      kube-system
	2bfed9b1c05ec       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      48 seconds ago      Running             kube-apiserver            0                   02860c57c9f0b       kube-apiserver-no-preload-444317            kube-system
	69d1802fce352       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      48 seconds ago      Running             kube-scheduler            0                   111eccd006e6a       kube-scheduler-no-preload-444317            kube-system
	6159298afc055       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      48 seconds ago      Running             kube-controller-manager   0                   576f020696276       kube-controller-manager-no-preload-444317   kube-system
	
	
	==> coredns [c8ece08ff518b5f39e83efdf2ebb53ccd31cd8f3b41f55a376ab1a8b397a48cf] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:35922 - 56995 "HINFO IN 2176244233526186640.5200035496218960858. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.019385218s
	
	
	==> describe nodes <==
	Name:               no-preload-444317
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-444317
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b5d1c9f4e75f4e638a533695fd62619949cefcab
	                    minikube.k8s.io/name=no-preload-444317
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T14_18_02_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 14:17:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-444317
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 14:18:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 14:18:32 +0000   Mon, 24 Nov 2025 14:17:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 14:18:32 +0000   Mon, 24 Nov 2025 14:17:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 14:18:32 +0000   Mon, 24 Nov 2025 14:17:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Nov 2025 14:18:32 +0000   Mon, 24 Nov 2025 14:18:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    no-preload-444317
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 7283ea1857f18f20a875c29069214c9d
	  System UUID:                7f3cb54f-ba1b-4064-b92e-1b7768ad96c4
	  Boot ID:                    1b5f797b-5607-4a65-8de2-379783b7e272
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  kube-system                 coredns-66bc5c9577-lrh58                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     32s
	  kube-system                 etcd-no-preload-444317                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         39s
	  kube-system                 kindnet-zwxh6                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      32s
	  kube-system                 kube-apiserver-no-preload-444317             250m (12%)    0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 kube-controller-manager-no-preload-444317    200m (10%)    0 (0%)      0 (0%)           0 (0%)         36s
	  kube-system                 kube-proxy-m4fb4                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-scheduler-no-preload-444317             100m (5%)     0 (0%)      0 (0%)           0 (0%)         36s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 30s                kube-proxy       
	  Warning  CgroupV1                 49s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  49s (x8 over 49s)  kubelet          Node no-preload-444317 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    49s (x8 over 49s)  kubelet          Node no-preload-444317 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     49s (x8 over 49s)  kubelet          Node no-preload-444317 status is now: NodeHasSufficientPID
	  Normal   Starting                 37s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 37s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  36s                kubelet          Node no-preload-444317 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    36s                kubelet          Node no-preload-444317 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     36s                kubelet          Node no-preload-444317 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           33s                node-controller  Node no-preload-444317 event: Registered Node no-preload-444317 in Controller
	  Normal   NodeReady                16s                kubelet          Node no-preload-444317 status is now: NodeReady
	
	
	==> dmesg <==
	[ +31.432146] overlayfs: idmapped layers are currently not supported
	[Nov24 13:53] overlayfs: idmapped layers are currently not supported
	[Nov24 13:54] overlayfs: idmapped layers are currently not supported
	[Nov24 13:56] overlayfs: idmapped layers are currently not supported
	[Nov24 13:57] overlayfs: idmapped layers are currently not supported
	[Nov24 13:58] overlayfs: idmapped layers are currently not supported
	[  +2.963383] overlayfs: idmapped layers are currently not supported
	[ +47.364934] overlayfs: idmapped layers are currently not supported
	[Nov24 13:59] overlayfs: idmapped layers are currently not supported
	[Nov24 14:00] overlayfs: idmapped layers are currently not supported
	[ +26.972375] overlayfs: idmapped layers are currently not supported
	[Nov24 14:02] overlayfs: idmapped layers are currently not supported
	[Nov24 14:03] overlayfs: idmapped layers are currently not supported
	[Nov24 14:05] overlayfs: idmapped layers are currently not supported
	[Nov24 14:07] overlayfs: idmapped layers are currently not supported
	[ +22.741489] overlayfs: idmapped layers are currently not supported
	[Nov24 14:11] overlayfs: idmapped layers are currently not supported
	[Nov24 14:13] overlayfs: idmapped layers are currently not supported
	[ +29.661409] overlayfs: idmapped layers are currently not supported
	[ +14.398898] overlayfs: idmapped layers are currently not supported
	[Nov24 14:14] overlayfs: idmapped layers are currently not supported
	[ +36.148198] overlayfs: idmapped layers are currently not supported
	[Nov24 14:16] overlayfs: idmapped layers are currently not supported
	[Nov24 14:17] overlayfs: idmapped layers are currently not supported
	[Nov24 14:18] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [b86ec04c3d2f17a46dc15cb0b62e646f63c67889e3cc557f80cd316784f8a881] <==
	{"level":"warn","ts":"2025-11-24T14:17:53.533603Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50956","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:17:53.579974Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50984","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:17:53.605534Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50998","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:17:53.637055Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51012","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:17:53.666998Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51026","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:17:53.694451Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51042","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:17:53.750014Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51070","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:17:53.840312Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51090","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:17:53.854051Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51100","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:17:53.897142Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51126","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:17:53.953510Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51146","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:17:53.979060Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51172","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:17:54.067713Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51202","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:17:54.088122Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51218","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:17:54.167443Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51246","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:17:54.184179Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51272","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:17:54.215553Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51282","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:17:54.255570Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51288","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:17:54.308113Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51296","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:17:54.367323Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51314","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:17:54.513888Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51326","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:17:54.556644Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51344","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:17:54.615430Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51362","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:17:54.693838Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51382","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:17:54.873969Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51404","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 14:18:37 up  2:01,  0 user,  load average: 3.15, 3.05, 2.51
	Linux no-preload-444317 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [af09f2cdfeb6a550802196ae8278b4581a01ca3d2670846569a8bd8401a74287] <==
	I1124 14:18:11.313358       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1124 14:18:11.313761       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1124 14:18:11.313900       1 main.go:148] setting mtu 1500 for CNI 
	I1124 14:18:11.313920       1 main.go:178] kindnetd IP family: "ipv4"
	I1124 14:18:11.313935       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-24T14:18:11Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1124 14:18:11.514680       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1124 14:18:11.514754       1 controller.go:381] "Waiting for informer caches to sync"
	I1124 14:18:11.514788       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1124 14:18:11.514939       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1124 14:18:11.715329       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1124 14:18:11.715460       1 metrics.go:72] Registering metrics
	I1124 14:18:11.715544       1 controller.go:711] "Syncing nftables rules"
	I1124 14:18:21.521546       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1124 14:18:21.521601       1 main.go:301] handling current node
	I1124 14:18:31.515208       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1124 14:18:31.515242       1 main.go:301] handling current node
	
	
	==> kube-apiserver [2bfed9b1c05ecaccfe423d8224c3d2cfa6e73a83e82f9383bc1d1d4801844471] <==
	I1124 14:17:57.022761       1 controller.go:667] quota admission added evaluator for: namespaces
	I1124 14:17:57.081279       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 14:17:57.116221       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1124 14:17:57.116690       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1124 14:17:57.193424       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 14:17:57.193581       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1124 14:17:57.210994       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1124 14:17:57.428061       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1124 14:17:57.456605       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1124 14:17:57.457627       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1124 14:17:58.964509       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1124 14:17:59.125561       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1124 14:17:59.284401       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1124 14:17:59.304214       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1124 14:17:59.305515       1 controller.go:667] quota admission added evaluator for: endpoints
	I1124 14:17:59.315723       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1124 14:17:59.630910       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1124 14:18:01.018874       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1124 14:18:01.125873       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1124 14:18:01.211705       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1124 14:18:05.284293       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1124 14:18:05.501973       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1124 14:18:05.890258       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 14:18:05.939190       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1124 14:18:35.898637       1 conn.go:339] Error on socket receive: read tcp 192.168.76.2:8443->192.168.76.1:51674: use of closed network connection
	
	
	==> kube-controller-manager [6159298afc0556d9896cbc1845fad6d36c947648fe113a21259f68225f2a71b5] <==
	I1124 14:18:04.674906       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1124 14:18:04.675862       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1124 14:18:04.676555       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1124 14:18:04.683348       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1124 14:18:04.683499       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1124 14:18:04.683524       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1124 14:18:04.683539       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1124 14:18:04.699605       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1124 14:18:04.699651       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1124 14:18:04.702022       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1124 14:18:04.714357       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1124 14:18:04.717630       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1124 14:18:04.723758       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1124 14:18:04.724020       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1124 14:18:04.724068       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1124 14:18:04.724289       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1124 14:18:04.724328       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1124 14:18:04.724418       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 14:18:04.724426       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1124 14:18:04.724432       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1124 14:18:04.732161       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1124 14:18:04.732309       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1124 14:18:04.733703       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 14:18:04.744695       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="no-preload-444317" podCIDRs=["10.244.0.0/24"]
	I1124 14:18:24.667161       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [27d19fd820af89972679f8eeb6d5a70ce45c895006673d8eca1beafd83a2554f] <==
	I1124 14:18:06.527810       1 server_linux.go:53] "Using iptables proxy"
	I1124 14:18:06.696978       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1124 14:18:06.797509       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1124 14:18:06.797549       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1124 14:18:06.797622       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 14:18:06.906808       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 14:18:06.906915       1 server_linux.go:132] "Using iptables Proxier"
	I1124 14:18:06.911616       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 14:18:06.912602       1 server.go:527] "Version info" version="v1.34.1"
	I1124 14:18:06.912622       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 14:18:06.917870       1 config.go:200] "Starting service config controller"
	I1124 14:18:06.917888       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 14:18:06.917905       1 config.go:106] "Starting endpoint slice config controller"
	I1124 14:18:06.917909       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 14:18:06.917919       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 14:18:06.917924       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1124 14:18:06.918528       1 config.go:309] "Starting node config controller"
	I1124 14:18:06.918537       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 14:18:06.918543       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1124 14:18:07.018898       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1124 14:18:07.018934       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1124 14:18:07.018980       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [69d1802fce3529fce0b586da8ee0ac6d60ce3d39e6fe19ec4457b0cf1bdf8e31] <==
	I1124 14:17:53.375443       1 serving.go:386] Generated self-signed cert in-memory
	I1124 14:18:00.100071       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1124 14:18:00.100193       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 14:18:00.107459       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1124 14:18:00.107957       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 14:18:00.115022       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 14:18:00.107915       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1124 14:18:00.115141       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1124 14:18:00.107973       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1124 14:18:00.133753       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1124 14:18:00.107998       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1124 14:18:00.242708       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1124 14:18:00.261231       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 14:18:00.261975       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	
	
	==> kubelet <==
	Nov 24 14:18:04 no-preload-444317 kubelet[2000]: I1124 14:18:04.804566    2000 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 24 14:18:04 no-preload-444317 kubelet[2000]: I1124 14:18:04.805825    2000 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 24 14:18:05 no-preload-444317 kubelet[2000]: I1124 14:18:05.457600    2000 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/5314d14c-5c74-4df6-a25c-349e3ce92848-cni-cfg\") pod \"kindnet-zwxh6\" (UID: \"5314d14c-5c74-4df6-a25c-349e3ce92848\") " pod="kube-system/kindnet-zwxh6"
	Nov 24 14:18:05 no-preload-444317 kubelet[2000]: I1124 14:18:05.457662    2000 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5314d14c-5c74-4df6-a25c-349e3ce92848-lib-modules\") pod \"kindnet-zwxh6\" (UID: \"5314d14c-5c74-4df6-a25c-349e3ce92848\") " pod="kube-system/kindnet-zwxh6"
	Nov 24 14:18:05 no-preload-444317 kubelet[2000]: I1124 14:18:05.457705    2000 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/51e5fbb8-6216-4c92-a92e-a618ffdb2cf5-kube-proxy\") pod \"kube-proxy-m4fb4\" (UID: \"51e5fbb8-6216-4c92-a92e-a618ffdb2cf5\") " pod="kube-system/kube-proxy-m4fb4"
	Nov 24 14:18:05 no-preload-444317 kubelet[2000]: I1124 14:18:05.457736    2000 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/51e5fbb8-6216-4c92-a92e-a618ffdb2cf5-xtables-lock\") pod \"kube-proxy-m4fb4\" (UID: \"51e5fbb8-6216-4c92-a92e-a618ffdb2cf5\") " pod="kube-system/kube-proxy-m4fb4"
	Nov 24 14:18:05 no-preload-444317 kubelet[2000]: I1124 14:18:05.457769    2000 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lg97\" (UniqueName: \"kubernetes.io/projected/5314d14c-5c74-4df6-a25c-349e3ce92848-kube-api-access-6lg97\") pod \"kindnet-zwxh6\" (UID: \"5314d14c-5c74-4df6-a25c-349e3ce92848\") " pod="kube-system/kindnet-zwxh6"
	Nov 24 14:18:05 no-preload-444317 kubelet[2000]: I1124 14:18:05.457792    2000 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5314d14c-5c74-4df6-a25c-349e3ce92848-xtables-lock\") pod \"kindnet-zwxh6\" (UID: \"5314d14c-5c74-4df6-a25c-349e3ce92848\") " pod="kube-system/kindnet-zwxh6"
	Nov 24 14:18:05 no-preload-444317 kubelet[2000]: I1124 14:18:05.457813    2000 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/51e5fbb8-6216-4c92-a92e-a618ffdb2cf5-lib-modules\") pod \"kube-proxy-m4fb4\" (UID: \"51e5fbb8-6216-4c92-a92e-a618ffdb2cf5\") " pod="kube-system/kube-proxy-m4fb4"
	Nov 24 14:18:05 no-preload-444317 kubelet[2000]: I1124 14:18:05.457837    2000 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nt42t\" (UniqueName: \"kubernetes.io/projected/51e5fbb8-6216-4c92-a92e-a618ffdb2cf5-kube-api-access-nt42t\") pod \"kube-proxy-m4fb4\" (UID: \"51e5fbb8-6216-4c92-a92e-a618ffdb2cf5\") " pod="kube-system/kube-proxy-m4fb4"
	Nov 24 14:18:05 no-preload-444317 kubelet[2000]: I1124 14:18:05.584584    2000 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 24 14:18:05 no-preload-444317 kubelet[2000]: W1124 14:18:05.701665    2000 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/ade20648158abf5218a944caa623bf3f6036e6cac8f095be63310184940923ce/crio-04692b32b0bb558addd520a35c0715828ef78b9f3ec207d8b6059c4e4c0b46e7 WatchSource:0}: Error finding container 04692b32b0bb558addd520a35c0715828ef78b9f3ec207d8b6059c4e4c0b46e7: Status 404 returned error can't find the container with id 04692b32b0bb558addd520a35c0715828ef78b9f3ec207d8b6059c4e4c0b46e7
	Nov 24 14:18:05 no-preload-444317 kubelet[2000]: W1124 14:18:05.818743    2000 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/ade20648158abf5218a944caa623bf3f6036e6cac8f095be63310184940923ce/crio-ae1a58f8936b5852a44d3df31b04a0c251d634b395af241c0fc448e0954ba171 WatchSource:0}: Error finding container ae1a58f8936b5852a44d3df31b04a0c251d634b395af241c0fc448e0954ba171: Status 404 returned error can't find the container with id ae1a58f8936b5852a44d3df31b04a0c251d634b395af241c0fc448e0954ba171
	Nov 24 14:18:07 no-preload-444317 kubelet[2000]: I1124 14:18:07.380324    2000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-m4fb4" podStartSLOduration=2.380303715 podStartE2EDuration="2.380303715s" podCreationTimestamp="2025-11-24 14:18:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 14:18:07.380046875 +0000 UTC m=+6.543938199" watchObservedRunningTime="2025-11-24 14:18:07.380303715 +0000 UTC m=+6.544195023"
	Nov 24 14:18:11 no-preload-444317 kubelet[2000]: I1124 14:18:11.400521    2000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-zwxh6" podStartSLOduration=1.059498468 podStartE2EDuration="6.400504547s" podCreationTimestamp="2025-11-24 14:18:05 +0000 UTC" firstStartedPulling="2025-11-24 14:18:05.715972084 +0000 UTC m=+4.879863392" lastFinishedPulling="2025-11-24 14:18:11.056978163 +0000 UTC m=+10.220869471" observedRunningTime="2025-11-24 14:18:11.398968727 +0000 UTC m=+10.562860051" watchObservedRunningTime="2025-11-24 14:18:11.400504547 +0000 UTC m=+10.564395855"
	Nov 24 14:18:21 no-preload-444317 kubelet[2000]: I1124 14:18:21.994661    2000 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 24 14:18:22 no-preload-444317 kubelet[2000]: I1124 14:18:22.243621    2000 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/feb6ac32-bf93-4488-9574-cdc018d6c759-config-volume\") pod \"coredns-66bc5c9577-lrh58\" (UID: \"feb6ac32-bf93-4488-9574-cdc018d6c759\") " pod="kube-system/coredns-66bc5c9577-lrh58"
	Nov 24 14:18:22 no-preload-444317 kubelet[2000]: I1124 14:18:22.243680    2000 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4httd\" (UniqueName: \"kubernetes.io/projected/feb6ac32-bf93-4488-9574-cdc018d6c759-kube-api-access-4httd\") pod \"coredns-66bc5c9577-lrh58\" (UID: \"feb6ac32-bf93-4488-9574-cdc018d6c759\") " pod="kube-system/coredns-66bc5c9577-lrh58"
	Nov 24 14:18:22 no-preload-444317 kubelet[2000]: I1124 14:18:22.243716    2000 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/abff3443-a7cc-445c-94d1-a6e96ed61024-tmp\") pod \"storage-provisioner\" (UID: \"abff3443-a7cc-445c-94d1-a6e96ed61024\") " pod="kube-system/storage-provisioner"
	Nov 24 14:18:22 no-preload-444317 kubelet[2000]: I1124 14:18:22.243743    2000 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hldd\" (UniqueName: \"kubernetes.io/projected/abff3443-a7cc-445c-94d1-a6e96ed61024-kube-api-access-4hldd\") pod \"storage-provisioner\" (UID: \"abff3443-a7cc-445c-94d1-a6e96ed61024\") " pod="kube-system/storage-provisioner"
	Nov 24 14:18:22 no-preload-444317 kubelet[2000]: W1124 14:18:22.775622    2000 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/ade20648158abf5218a944caa623bf3f6036e6cac8f095be63310184940923ce/crio-f0067d24d056fcb334041ce4330b855209cfc2c9ac07f7fe22a5bce3d85f63cb WatchSource:0}: Error finding container f0067d24d056fcb334041ce4330b855209cfc2c9ac07f7fe22a5bce3d85f63cb: Status 404 returned error can't find the container with id f0067d24d056fcb334041ce4330b855209cfc2c9ac07f7fe22a5bce3d85f63cb
	Nov 24 14:18:23 no-preload-444317 kubelet[2000]: I1124 14:18:23.493760    2000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-lrh58" podStartSLOduration=18.49374103 podStartE2EDuration="18.49374103s" podCreationTimestamp="2025-11-24 14:18:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 14:18:23.464276196 +0000 UTC m=+22.628167521" watchObservedRunningTime="2025-11-24 14:18:23.49374103 +0000 UTC m=+22.657632338"
	Nov 24 14:18:25 no-preload-444317 kubelet[2000]: I1124 14:18:25.767254    2000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=17.767233774 podStartE2EDuration="17.767233774s" podCreationTimestamp="2025-11-24 14:18:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 14:18:23.550209536 +0000 UTC m=+22.714100852" watchObservedRunningTime="2025-11-24 14:18:25.767233774 +0000 UTC m=+24.931125081"
	Nov 24 14:18:25 no-preload-444317 kubelet[2000]: I1124 14:18:25.968799    2000 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdcmk\" (UniqueName: \"kubernetes.io/projected/425d89e6-e7dd-4305-a272-badc5ebf1597-kube-api-access-xdcmk\") pod \"busybox\" (UID: \"425d89e6-e7dd-4305-a272-badc5ebf1597\") " pod="default/busybox"
	Nov 24 14:18:29 no-preload-444317 kubelet[2000]: I1124 14:18:29.452871    2000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=2.403331968 podStartE2EDuration="4.452853506s" podCreationTimestamp="2025-11-24 14:18:25 +0000 UTC" firstStartedPulling="2025-11-24 14:18:26.415048965 +0000 UTC m=+25.578940273" lastFinishedPulling="2025-11-24 14:18:28.464570503 +0000 UTC m=+27.628461811" observedRunningTime="2025-11-24 14:18:29.452642345 +0000 UTC m=+28.616533669" watchObservedRunningTime="2025-11-24 14:18:29.452853506 +0000 UTC m=+28.616744822"
	
	
	==> storage-provisioner [6033453973ec270d590c2db0e472959b219bfdf54a1000ef710d86fa3823c147] <==
	I1124 14:18:22.838595       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1124 14:18:23.020133       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1124 14:18:23.020274       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1124 14:18:23.023679       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:18:23.035237       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1124 14:18:23.035725       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1124 14:18:23.035959       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-444317_e179ca5b-707f-4df0-9082-0d5f457c2e8a!
	I1124 14:18:23.057987       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"32afcba2-0797-489f-b777-85af3a10990a", APIVersion:"v1", ResourceVersion:"462", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-444317_e179ca5b-707f-4df0-9082-0d5f457c2e8a became leader
	W1124 14:18:23.058501       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:18:23.075785       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1124 14:18:23.139065       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-444317_e179ca5b-707f-4df0-9082-0d5f457c2e8a!
	W1124 14:18:25.079183       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:18:25.087236       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:18:27.091109       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:18:27.095588       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:18:29.098647       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:18:29.103578       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:18:31.107061       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:18:31.111733       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:18:33.116233       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:18:33.121005       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:18:35.124727       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:18:35.131726       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:18:37.135826       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:18:37.141669       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-444317 -n no-preload-444317
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-444317 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.55s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.79s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-720293 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-720293 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (365.530599ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T14:19:17Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-720293 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
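The paused check that trips here is the literal command quoted in the stderr above: minikube shells into the node and runs "sudo runc list -f json", which fails on this crio node because the runc state directory /run/runc does not exist ("open /run/runc: no such file or directory"). A minimal sketch for rerunning that check by hand, using the report's own binary and profile name; the second and third probes are suggested diagnostics, not part of the test:

	$ out/minikube-linux-arm64 -p embed-certs-720293 ssh "sudo runc list -f json"   # the exact command the paused check runs
	$ out/minikube-linux-arm64 -p embed-certs-720293 ssh "ls /run/runc"             # does the runc state directory exist yet?
	$ out/minikube-linux-arm64 -p embed-certs-720293 ssh "sudo crictl ps"           # crio's own view of the running containers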
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-720293 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context embed-certs-720293 describe deploy/metrics-server -n kube-system: exit status 1 (109.099999ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-720293 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
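The assertion above scans the describe output for the overridden image, so it can only pass once the metrics-server deployment exists and carries fake.domain/registry.k8s.io/echoserver:1.4. A jsonpath query that reads just that field, assuming the deployment had actually been created:

	$ kubectl --context embed-certs-720293 -n kube-system get deploy metrics-server \
	    -o jsonpath='{.spec.template.spec.containers[0].image}'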
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-720293
helpers_test.go:243: (dbg) docker inspect embed-certs-720293:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "70d00db6e7822d3b00ce565e804c7ecaca79c8fb11b2d568f3f30fb3df09a34b",
	        "Created": "2025-11-24T14:17:44.795163657Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 192344,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T14:17:44.851778122Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/70d00db6e7822d3b00ce565e804c7ecaca79c8fb11b2d568f3f30fb3df09a34b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/70d00db6e7822d3b00ce565e804c7ecaca79c8fb11b2d568f3f30fb3df09a34b/hostname",
	        "HostsPath": "/var/lib/docker/containers/70d00db6e7822d3b00ce565e804c7ecaca79c8fb11b2d568f3f30fb3df09a34b/hosts",
	        "LogPath": "/var/lib/docker/containers/70d00db6e7822d3b00ce565e804c7ecaca79c8fb11b2d568f3f30fb3df09a34b/70d00db6e7822d3b00ce565e804c7ecaca79c8fb11b2d568f3f30fb3df09a34b-json.log",
	        "Name": "/embed-certs-720293",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-720293:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-720293",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "70d00db6e7822d3b00ce565e804c7ecaca79c8fb11b2d568f3f30fb3df09a34b",
	                "LowerDir": "/var/lib/docker/overlay2/102ebfc08e7b1e712e2de1b9f813877a1efeaf0db28c4c987f30c212819821a6-init/diff:/var/lib/docker/overlay2/13a44a1c9c7389f495d930a01834ff28273a0e5eb2fe3411fc4db3ff0709690d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/102ebfc08e7b1e712e2de1b9f813877a1efeaf0db28c4c987f30c212819821a6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/102ebfc08e7b1e712e2de1b9f813877a1efeaf0db28c4c987f30c212819821a6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/102ebfc08e7b1e712e2de1b9f813877a1efeaf0db28c4c987f30c212819821a6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-720293",
	                "Source": "/var/lib/docker/volumes/embed-certs-720293/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-720293",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-720293",
	                "name.minikube.sigs.k8s.io": "embed-certs-720293",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "809b3ae6eace4ebe54dda29cc3c2565887ebf28fb5c77793f2e0949fdb6380a7",
	            "SandboxKey": "/var/run/docker/netns/809b3ae6eace",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33063"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33064"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33067"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33065"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33066"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-720293": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "aa:40:45:e7:65:af",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "8c89ad55a017b9d150fec1f0d910c923b1dbfb234d3a49fcfd228e2952fc9581",
	                    "EndpointID": "9bda1da0967de10c2f9cba93f91423f07dcc791b8a123e3faa17c7db461dafb6",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-720293",
	                        "70d00db6e782"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
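The inspect dump above includes the NetworkSettings.Ports map, which is where the node's published API endpoint lives (8443/tcp mapped to 127.0.0.1:33066 in this run). One standard Go-template query extracts just that host port, using the container name from this report:

	$ docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' embed-certs-720293
	33066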
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-720293 -n embed-certs-720293
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-720293 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-720293 logs -n 25: (1.282775862s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p force-systemd-flag-928059                                                                                                                                                                                                                  │ force-systemd-flag-928059 │ jenkins │ v1.37.0 │ 24 Nov 25 14:13 UTC │ 24 Nov 25 14:13 UTC │
	│ start   │ -p cert-expiration-032076 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-032076    │ jenkins │ v1.37.0 │ 24 Nov 25 14:13 UTC │ 24 Nov 25 14:14 UTC │
	│ delete  │ -p force-systemd-env-289577                                                                                                                                                                                                                   │ force-systemd-env-289577  │ jenkins │ v1.37.0 │ 24 Nov 25 14:13 UTC │ 24 Nov 25 14:13 UTC │
	│ start   │ -p cert-options-097221 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-097221       │ jenkins │ v1.37.0 │ 24 Nov 25 14:13 UTC │ 24 Nov 25 14:14 UTC │
	│ ssh     │ cert-options-097221 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-097221       │ jenkins │ v1.37.0 │ 24 Nov 25 14:14 UTC │ 24 Nov 25 14:14 UTC │
	│ ssh     │ -p cert-options-097221 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-097221       │ jenkins │ v1.37.0 │ 24 Nov 25 14:14 UTC │ 24 Nov 25 14:14 UTC │
	│ delete  │ -p cert-options-097221                                                                                                                                                                                                                        │ cert-options-097221       │ jenkins │ v1.37.0 │ 24 Nov 25 14:14 UTC │ 24 Nov 25 14:14 UTC │
	│ start   │ -p old-k8s-version-706771 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-706771    │ jenkins │ v1.37.0 │ 24 Nov 25 14:14 UTC │ 24 Nov 25 14:15 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-706771 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-706771    │ jenkins │ v1.37.0 │ 24 Nov 25 14:15 UTC │                     │
	│ stop    │ -p old-k8s-version-706771 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-706771    │ jenkins │ v1.37.0 │ 24 Nov 25 14:15 UTC │ 24 Nov 25 14:15 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-706771 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-706771    │ jenkins │ v1.37.0 │ 24 Nov 25 14:15 UTC │ 24 Nov 25 14:15 UTC │
	│ start   │ -p old-k8s-version-706771 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-706771    │ jenkins │ v1.37.0 │ 24 Nov 25 14:15 UTC │ 24 Nov 25 14:16 UTC │
	│ image   │ old-k8s-version-706771 image list --format=json                                                                                                                                                                                               │ old-k8s-version-706771    │ jenkins │ v1.37.0 │ 24 Nov 25 14:16 UTC │ 24 Nov 25 14:16 UTC │
	│ pause   │ -p old-k8s-version-706771 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-706771    │ jenkins │ v1.37.0 │ 24 Nov 25 14:16 UTC │                     │
	│ start   │ -p cert-expiration-032076 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-032076    │ jenkins │ v1.37.0 │ 24 Nov 25 14:17 UTC │ 24 Nov 25 14:17 UTC │
	│ delete  │ -p old-k8s-version-706771                                                                                                                                                                                                                     │ old-k8s-version-706771    │ jenkins │ v1.37.0 │ 24 Nov 25 14:17 UTC │ 24 Nov 25 14:17 UTC │
	│ delete  │ -p old-k8s-version-706771                                                                                                                                                                                                                     │ old-k8s-version-706771    │ jenkins │ v1.37.0 │ 24 Nov 25 14:17 UTC │ 24 Nov 25 14:17 UTC │
	│ start   │ -p no-preload-444317 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-444317         │ jenkins │ v1.37.0 │ 24 Nov 25 14:17 UTC │ 24 Nov 25 14:18 UTC │
	│ delete  │ -p cert-expiration-032076                                                                                                                                                                                                                     │ cert-expiration-032076    │ jenkins │ v1.37.0 │ 24 Nov 25 14:17 UTC │ 24 Nov 25 14:17 UTC │
	│ start   │ -p embed-certs-720293 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-720293        │ jenkins │ v1.37.0 │ 24 Nov 25 14:17 UTC │ 24 Nov 25 14:19 UTC │
	│ addons  │ enable metrics-server -p no-preload-444317 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-444317         │ jenkins │ v1.37.0 │ 24 Nov 25 14:18 UTC │                     │
	│ stop    │ -p no-preload-444317 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-444317         │ jenkins │ v1.37.0 │ 24 Nov 25 14:18 UTC │ 24 Nov 25 14:18 UTC │
	│ addons  │ enable dashboard -p no-preload-444317 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-444317         │ jenkins │ v1.37.0 │ 24 Nov 25 14:18 UTC │ 24 Nov 25 14:18 UTC │
	│ start   │ -p no-preload-444317 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-444317         │ jenkins │ v1.37.0 │ 24 Nov 25 14:18 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-720293 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-720293        │ jenkins │ v1.37.0 │ 24 Nov 25 14:19 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 14:18:50
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
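The header above fixes the klog line layout for everything that follows. A small Go regexp that splits lines of exactly that [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg shape can be handy when grepping these dumps; it is illustrative only and not part of the test harness.

    package main

    import (
    	"fmt"
    	"regexp"
    )

    // klogLine encodes the documented format:
    // [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
    var klogLine = regexp.MustCompile(
    	`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+) ([^:]+:\d+)\] (.*)$`)

    func main() {
    	m := klogLine.FindStringSubmatch(
    		"I1124 14:18:50.723782  195877 out.go:360] Setting OutFile to fd 1 ...")
    	if m != nil {
    		fmt.Printf("level=%s date=%s time=%s tid=%s src=%s msg=%q\n",
    			m[1], m[2], m[3], m[4], m[5], m[6])
    	}
    }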
	I1124 14:18:50.723782  195877 out.go:360] Setting OutFile to fd 1 ...
	I1124 14:18:50.723995  195877 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 14:18:50.724026  195877 out.go:374] Setting ErrFile to fd 2...
	I1124 14:18:50.724049  195877 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 14:18:50.724319  195877 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-2805/.minikube/bin
	I1124 14:18:50.724722  195877 out.go:368] Setting JSON to false
	I1124 14:18:50.725700  195877 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":7282,"bootTime":1763986649,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1124 14:18:50.725801  195877 start.go:143] virtualization:  
	I1124 14:18:50.728984  195877 out.go:179] * [no-preload-444317] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1124 14:18:50.732956  195877 out.go:179]   - MINIKUBE_LOCATION=21932
	I1124 14:18:50.733087  195877 notify.go:221] Checking for updates...
	I1124 14:18:50.739102  195877 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 14:18:50.742117  195877 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21932-2805/kubeconfig
	I1124 14:18:50.745050  195877 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21932-2805/.minikube
	I1124 14:18:50.748108  195877 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1124 14:18:50.750959  195877 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 14:18:50.754438  195877 config.go:182] Loaded profile config "no-preload-444317": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 14:18:50.755027  195877 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 14:18:50.782620  195877 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1124 14:18:50.782747  195877 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 14:18:50.845577  195877 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-24 14:18:50.835829102 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 14:18:50.845679  195877 docker.go:319] overlay module found
	I1124 14:18:50.848883  195877 out.go:179] * Using the docker driver based on existing profile
	I1124 14:18:50.851760  195877 start.go:309] selected driver: docker
	I1124 14:18:50.851780  195877 start.go:927] validating driver "docker" against &{Name:no-preload-444317 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-444317 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 14:18:50.851884  195877 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 14:18:50.852635  195877 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 14:18:50.930913  195877 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-24 14:18:50.920409248 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 14:18:50.931270  195877 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 14:18:50.931298  195877 cni.go:84] Creating CNI manager for ""
	I1124 14:18:50.931524  195877 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 14:18:50.931588  195877 start.go:353] cluster config:
	{Name:no-preload-444317 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-444317 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 14:18:50.936807  195877 out.go:179] * Starting "no-preload-444317" primary control-plane node in "no-preload-444317" cluster
	I1124 14:18:50.939725  195877 cache.go:134] Beginning downloading kic base image for docker with crio
	I1124 14:18:50.942807  195877 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1124 14:18:50.945669  195877 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 14:18:50.945706  195877 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1124 14:18:50.945821  195877 profile.go:143] Saving config to /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/no-preload-444317/config.json ...
	I1124 14:18:50.946152  195877 cache.go:107] acquiring lock: {Name:mk06a07b8dd45e1ff8e8b54d11bbe4a10b5038a9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 14:18:50.946228  195877 cache.go:115] /home/jenkins/minikube-integration/21932-2805/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1124 14:18:50.946235  195877 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21932-2805/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 91.513µs
	I1124 14:18:50.946247  195877 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21932-2805/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1124 14:18:50.946258  195877 cache.go:107] acquiring lock: {Name:mkdefec7ce8d21bfa14397bfca4a6ecc54ee8c29 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 14:18:50.946289  195877 cache.go:115] /home/jenkins/minikube-integration/21932-2805/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1124 14:18:50.946294  195877 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21932-2805/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1" took 38.179µs
	I1124 14:18:50.946300  195877 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21932-2805/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1124 14:18:50.946296  195877 cache.go:107] acquiring lock: {Name:mk90afc4ba8203bd73e73c54a46acc59bbaa6aa2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 14:18:50.946327  195877 cache.go:107] acquiring lock: {Name:mkfcbabea9f76484fbd6d110e4cd20468278a4cc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 14:18:50.946359  195877 cache.go:115] /home/jenkins/minikube-integration/21932-2805/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 exists
	I1124 14:18:50.946364  195877 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21932-2805/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0" took 37.276µs
	I1124 14:18:50.946368  195877 cache.go:115] /home/jenkins/minikube-integration/21932-2805/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1124 14:18:50.946378  195877 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21932-2805/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1" took 88.609µs
	I1124 14:18:50.946386  195877 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21932-2805/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1124 14:18:50.946378  195877 cache.go:107] acquiring lock: {Name:mk8a03e657128c4765f1084d255df5c22666f6de Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 14:18:50.946394  195877 cache.go:107] acquiring lock: {Name:mka8a942d133705518eba6a0314003a291126bcb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 14:18:50.946423  195877 cache.go:115] /home/jenkins/minikube-integration/21932-2805/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1124 14:18:50.946430  195877 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21932-2805/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1" took 52.324µs
	I1124 14:18:50.946436  195877 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21932-2805/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1124 14:18:50.946423  195877 cache.go:115] /home/jenkins/minikube-integration/21932-2805/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1124 14:18:50.946447  195877 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21932-2805/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1" took 53.194µs
	I1124 14:18:50.946435  195877 cache.go:107] acquiring lock: {Name:mk2288117eed9dd24dec1d6cd5a5203a71d53a73 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 14:18:50.946311  195877 cache.go:107] acquiring lock: {Name:mk327e5a8afc03c17e27a8624d340dabb273bb0f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 14:18:50.946474  195877 cache.go:115] /home/jenkins/minikube-integration/21932-2805/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1124 14:18:50.946479  195877 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21932-2805/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1" took 45.646µs
	I1124 14:18:50.946483  195877 cache.go:115] /home/jenkins/minikube-integration/21932-2805/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1124 14:18:50.946486  195877 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21932-2805/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1124 14:18:50.946453  195877 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21932-2805/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1124 14:18:50.946370  195877 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21932-2805/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1124 14:18:50.946489  195877 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21932-2805/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 181.245µs
	I1124 14:18:50.946497  195877 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21932-2805/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1124 14:18:50.946503  195877 cache.go:87] Successfully saved all images to host disk.
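The cache.go burst above follows one pattern per image: take a per-image lock, stat the cached tarball, and skip the save when it already exists (hence the microsecond "took" timings). A condensed Go sketch of that check-then-skip idiom follows; ensureCached, the paths, and the save callback are placeholders, not minikube's real functions.

    package main

    import (
    	"fmt"
    	"os"
    	"sync"
    	"time"
    )

    var locks sync.Map // one mutex per image name

    // ensureCached mirrors the log's flow: lock the image, skip when the
    // cache tarball already exists, otherwise save it.
    func ensureCached(image, tarPath string, save func(string, string) error) error {
    	mu, _ := locks.LoadOrStore(image, &sync.Mutex{})
    	mu.(*sync.Mutex).Lock()
    	defer mu.(*sync.Mutex).Unlock()

    	start := time.Now()
    	if _, err := os.Stat(tarPath); err == nil {
    		fmt.Printf("cache image %q -> %q took %s (exists, skipping save)\n",
    			image, tarPath, time.Since(start))
    		return nil
    	}
    	return save(image, tarPath)
    }

    func main() {
    	_ = ensureCached("registry.k8s.io/pause:3.10.1", "/tmp/pause_3.10.1",
    		func(img, path string) error { return os.WriteFile(path, nil, 0o644) })
    }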
	I1124 14:18:50.966692  195877 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1124 14:18:50.966714  195877 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1124 14:18:50.966731  195877 cache.go:240] Successfully downloaded all kic artifacts
	I1124 14:18:50.966759  195877 start.go:360] acquireMachinesLock for no-preload-444317: {Name:mkb080464ba5faf4046ee521f94000d9698fefe3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 14:18:50.966810  195877 start.go:364] duration metric: took 36.538µs to acquireMachinesLock for "no-preload-444317"
	I1124 14:18:50.966828  195877 start.go:96] Skipping create...Using existing machine configuration
	I1124 14:18:50.966834  195877 fix.go:54] fixHost starting: 
	I1124 14:18:50.967092  195877 cli_runner.go:164] Run: docker container inspect no-preload-444317 --format={{.State.Status}}
	I1124 14:18:50.984074  195877 fix.go:112] recreateIfNeeded on no-preload-444317: state=Stopped err=<nil>
	W1124 14:18:50.984103  195877 fix.go:138] unexpected machine state, will restart: <nil>
	W1124 14:18:49.624323  191849 node_ready.go:57] node "embed-certs-720293" has "Ready":"False" status (will retry)
	W1124 14:18:51.625495  191849 node_ready.go:57] node "embed-certs-720293" has "Ready":"False" status (will retry)
	I1124 14:18:50.987528  195877 out.go:252] * Restarting existing docker container for "no-preload-444317" ...
	I1124 14:18:50.987626  195877 cli_runner.go:164] Run: docker start no-preload-444317
	I1124 14:18:51.275434  195877 cli_runner.go:164] Run: docker container inspect no-preload-444317 --format={{.State.Status}}
	I1124 14:18:51.302605  195877 kic.go:430] container "no-preload-444317" state is running.
	I1124 14:18:51.304443  195877 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-444317
	I1124 14:18:51.327656  195877 profile.go:143] Saving config to /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/no-preload-444317/config.json ...
	I1124 14:18:51.328054  195877 machine.go:94] provisionDockerMachine start ...
	I1124 14:18:51.328120  195877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-444317
	I1124 14:18:51.350911  195877 main.go:143] libmachine: Using SSH client type: native
	I1124 14:18:51.351225  195877 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I1124 14:18:51.351233  195877 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 14:18:51.352118  195877 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1124 14:18:54.509592  195877 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-444317
	
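Note the sequence above: the first SSH dial right after docker start fails with "ssh: handshake failed: EOF", and the same command succeeds about three seconds later once sshd inside the container is up. A generic Go retry wrapper expressing that behaviour is sketched below; the dial callback stands in for whatever SSH client is used, and none of this is minikube's code.

    package main

    import (
    	"errors"
    	"fmt"
    	"time"
    )

    // dialWithRetry retries a flaky connection, as the log shows happening
    // while the restarted container's sshd is still starting up.
    func dialWithRetry(dial func() error, attempts int, delay time.Duration) error {
    	var err error
    	for i := 0; i < attempts; i++ {
    		if err = dial(); err == nil {
    			return nil
    		}
    		time.Sleep(delay) // the log shows a ~3s gap before success
    	}
    	return fmt.Errorf("after %d attempts: %w", attempts, err)
    }

    func main() {
    	calls := 0
    	err := dialWithRetry(func() error {
    		calls++
    		if calls < 2 {
    			return errors.New("ssh: handshake failed: EOF")
    		}
    		return nil
    	}, 5, 10*time.Millisecond)
    	fmt.Println("err:", err, "calls:", calls)
    }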
	I1124 14:18:54.509637  195877 ubuntu.go:182] provisioning hostname "no-preload-444317"
	I1124 14:18:54.509720  195877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-444317
	I1124 14:18:54.529686  195877 main.go:143] libmachine: Using SSH client type: native
	I1124 14:18:54.529996  195877 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I1124 14:18:54.530013  195877 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-444317 && echo "no-preload-444317" | sudo tee /etc/hostname
	I1124 14:18:54.693853  195877 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-444317
	
	I1124 14:18:54.693946  195877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-444317
	I1124 14:18:54.711933  195877 main.go:143] libmachine: Using SSH client type: native
	I1124 14:18:54.712253  195877 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I1124 14:18:54.712276  195877 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-444317' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-444317/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-444317' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 14:18:54.868296  195877 main.go:143] libmachine: SSH cmd err, output: <nil>: 
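The hosts script above is idempotent: an exact-line grep bails out early, an existing 127.0.1.1 entry is rewritten in place, and only otherwise is a new line appended. The same logic in Go, operating on the file contents as a string, might look like this; ensureHostname is invented for illustration.

    package main

    import (
    	"fmt"
    	"regexp"
    	"strings"
    )

    // ensureHostname reproduces the shell logic: if any line already ends
    // in the hostname, leave the contents alone; if a 127.0.1.1 line
    // exists, rewrite it; otherwise append one.
    func ensureHostname(hosts, name string) string {
    	if regexp.MustCompile(`(?m)^.*\s`+regexp.QuoteMeta(name)+`$`).MatchString(hosts) {
    		return hosts
    	}
    	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
    	if loopback.MatchString(hosts) {
    		return loopback.ReplaceAllString(hosts, "127.0.1.1 "+name)
    	}
    	return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + name + "\n"
    }

    func main() {
    	fmt.Print(ensureHostname("127.0.0.1 localhost\n", "no-preload-444317"))
    }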
	I1124 14:18:54.868341  195877 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21932-2805/.minikube CaCertPath:/home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21932-2805/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21932-2805/.minikube}
	I1124 14:18:54.868375  195877 ubuntu.go:190] setting up certificates
	I1124 14:18:54.868385  195877 provision.go:84] configureAuth start
	I1124 14:18:54.868450  195877 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-444317
	I1124 14:18:54.887996  195877 provision.go:143] copyHostCerts
	I1124 14:18:54.888074  195877 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-2805/.minikube/ca.pem, removing ...
	I1124 14:18:54.888100  195877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-2805/.minikube/ca.pem
	I1124 14:18:54.888189  195877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21932-2805/.minikube/ca.pem (1078 bytes)
	I1124 14:18:54.888310  195877 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-2805/.minikube/cert.pem, removing ...
	I1124 14:18:54.888329  195877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-2805/.minikube/cert.pem
	I1124 14:18:54.888362  195877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-2805/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21932-2805/.minikube/cert.pem (1123 bytes)
	I1124 14:18:54.888499  195877 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-2805/.minikube/key.pem, removing ...
	I1124 14:18:54.888512  195877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-2805/.minikube/key.pem
	I1124 14:18:54.888542  195877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-2805/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21932-2805/.minikube/key.pem (1675 bytes)
	I1124 14:18:54.888621  195877 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21932-2805/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca-key.pem org=jenkins.no-preload-444317 san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-444317]
	I1124 14:18:55.256742  195877 provision.go:177] copyRemoteCerts
	I1124 14:18:55.256816  195877 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 14:18:55.256862  195877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-444317
	I1124 14:18:55.273669  195877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/no-preload-444317/id_rsa Username:docker}
	I1124 14:18:55.380328  195877 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1124 14:18:55.401305  195877 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1124 14:18:55.419689  195877 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1124 14:18:55.438372  195877 provision.go:87] duration metric: took 569.936717ms to configureAuth
	I1124 14:18:55.438438  195877 ubuntu.go:206] setting minikube options for container-runtime
	I1124 14:18:55.438634  195877 config.go:182] Loaded profile config "no-preload-444317": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 14:18:55.438745  195877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-444317
	I1124 14:18:55.457696  195877 main.go:143] libmachine: Using SSH client type: native
	I1124 14:18:55.458008  195877 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I1124 14:18:55.458026  195877 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1124 14:18:55.841033  195877 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1124 14:18:55.841099  195877 machine.go:97] duration metric: took 4.513032364s to provisionDockerMachine
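provisionDockerMachine finishes by writing /etc/sysconfig/crio.minikube over SSH and restarting cri-o, with the service CIDR baked in as an insecure registry. Composing that remote command in Go looks roughly like the sketch below; buildCrioOpts is a hypothetical helper, though the command string matches the one in the log.

    package main

    import "fmt"

    // buildCrioOpts assembles the remote command shown in the log: write
    // the sysconfig drop-in, then restart cri-o.
    func buildCrioOpts(serviceCIDR string) string {
    	content := fmt.Sprintf("\nCRIO_MINIKUBE_OPTIONS='--insecure-registry %s '\n", serviceCIDR)
    	return fmt.Sprintf(`sudo mkdir -p /etc/sysconfig && printf %%s "%s" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio`, content)
    }

    func main() {
    	fmt.Println(buildCrioOpts("10.96.0.0/12"))
    }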
	I1124 14:18:55.841127  195877 start.go:293] postStartSetup for "no-preload-444317" (driver="docker")
	I1124 14:18:55.841157  195877 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 14:18:55.841237  195877 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 14:18:55.841301  195877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-444317
	I1124 14:18:55.864911  195877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/no-preload-444317/id_rsa Username:docker}
	I1124 14:18:55.975701  195877 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 14:18:55.979009  195877 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 14:18:55.979036  195877 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 14:18:55.979048  195877 filesync.go:126] Scanning /home/jenkins/minikube-integration/21932-2805/.minikube/addons for local assets ...
	I1124 14:18:55.979103  195877 filesync.go:126] Scanning /home/jenkins/minikube-integration/21932-2805/.minikube/files for local assets ...
	I1124 14:18:55.979183  195877 filesync.go:149] local asset: /home/jenkins/minikube-integration/21932-2805/.minikube/files/etc/ssl/certs/46112.pem -> 46112.pem in /etc/ssl/certs
	I1124 14:18:55.979290  195877 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 14:18:55.986726  195877 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/files/etc/ssl/certs/46112.pem --> /etc/ssl/certs/46112.pem (1708 bytes)
	I1124 14:18:56.008136  195877 start.go:296] duration metric: took 166.974469ms for postStartSetup
	I1124 14:18:56.008226  195877 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 14:18:56.008290  195877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-444317
	I1124 14:18:56.027940  195877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/no-preload-444317/id_rsa Username:docker}
	I1124 14:18:56.136491  195877 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 14:18:56.141588  195877 fix.go:56] duration metric: took 5.174746294s for fixHost
	I1124 14:18:56.141660  195877 start.go:83] releasing machines lock for "no-preload-444317", held for 5.174840884s
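The two duration metrics above measure different things: time spent waiting to acquire the machines lock (microseconds, since it is uncontended) versus time the lock was held across fixHost (about five seconds). A minimal Go sketch emitting both numbers the same way; this is a pattern demo, not the actual helper.

    package main

    import (
    	"fmt"
    	"sync"
    	"time"
    )

    func main() {
    	var mu sync.Mutex

    	wait := time.Now()
    	mu.Lock() // uncontended here, so this is microseconds
    	fmt.Printf("took %s to acquire lock\n", time.Since(wait))

    	held := time.Now()
    	time.Sleep(50 * time.Millisecond) // stand-in for the fixHost work
    	mu.Unlock()
    	fmt.Printf("releasing lock, held for %s\n", time.Since(held))
    }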
	I1124 14:18:56.141756  195877 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-444317
	I1124 14:18:56.158662  195877 ssh_runner.go:195] Run: cat /version.json
	I1124 14:18:56.158687  195877 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 14:18:56.158722  195877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-444317
	I1124 14:18:56.158744  195877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-444317
	I1124 14:18:56.176742  195877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/no-preload-444317/id_rsa Username:docker}
	I1124 14:18:56.184271  195877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/no-preload-444317/id_rsa Username:docker}
	I1124 14:18:56.285157  195877 ssh_runner.go:195] Run: systemctl --version
	I1124 14:18:56.400948  195877 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1124 14:18:56.440008  195877 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 14:18:56.444684  195877 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 14:18:56.444783  195877 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 14:18:56.455170  195877 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1124 14:18:56.455205  195877 start.go:496] detecting cgroup driver to use...
	I1124 14:18:56.455272  195877 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1124 14:18:56.455428  195877 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1124 14:18:56.470971  195877 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1124 14:18:56.484009  195877 docker.go:218] disabling cri-docker service (if available) ...
	I1124 14:18:56.484093  195877 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 14:18:56.499570  195877 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 14:18:56.513108  195877 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 14:18:56.636094  195877 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 14:18:56.760002  195877 docker.go:234] disabling docker service ...
	I1124 14:18:56.760111  195877 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 14:18:56.775117  195877 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 14:18:56.787822  195877 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 14:18:56.926306  195877 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 14:18:57.063979  195877 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 14:18:57.077473  195877 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 14:18:57.093050  195877 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1124 14:18:57.093154  195877 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 14:18:57.102446  195877 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1124 14:18:57.102553  195877 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 14:18:57.111587  195877 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 14:18:57.120487  195877 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 14:18:57.131756  195877 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 14:18:57.139872  195877 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 14:18:57.150333  195877 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 14:18:57.159166  195877 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 14:18:57.168535  195877 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 14:18:57.177548  195877 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 14:18:57.187651  195877 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 14:18:57.320004  195877 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1124 14:18:57.506597  195877 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1124 14:18:57.506734  195877 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1124 14:18:57.510737  195877 start.go:564] Will wait 60s for crictl version
	I1124 14:18:57.510839  195877 ssh_runner.go:195] Run: which crictl
	I1124 14:18:57.514449  195877 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 14:18:57.539245  195877 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1124 14:18:57.539328  195877 ssh_runner.go:195] Run: crio --version
	I1124 14:18:57.570456  195877 ssh_runner.go:195] Run: crio --version
	I1124 14:18:57.602851  195877 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	W1124 14:18:54.124742  191849 node_ready.go:57] node "embed-certs-720293" has "Ready":"False" status (will retry)
	W1124 14:18:56.125393  191849 node_ready.go:57] node "embed-certs-720293" has "Ready":"False" status (will retry)
	I1124 14:18:57.605954  195877 cli_runner.go:164] Run: docker network inspect no-preload-444317 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 14:18:57.621309  195877 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1124 14:18:57.625806  195877 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 14:18:57.637471  195877 kubeadm.go:884] updating cluster {Name:no-preload-444317 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-444317 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 14:18:57.637588  195877 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 14:18:57.637630  195877 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 14:18:57.671487  195877 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 14:18:57.671513  195877 cache_images.go:86] Images are preloaded, skipping loading
	I1124 14:18:57.671521  195877 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1124 14:18:57.671642  195877 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-444317 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-444317 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1124 14:18:57.671733  195877 ssh_runner.go:195] Run: crio config
	I1124 14:18:57.747592  195877 cni.go:84] Creating CNI manager for ""
	I1124 14:18:57.747616  195877 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 14:18:57.747632  195877 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 14:18:57.747657  195877 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-444317 NodeName:no-preload-444317 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 14:18:57.747790  195877 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-444317"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
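	
	minikube renders the three config documents above from its cluster struct. A toy Go text/template rendering of just the networking stanza shows the mechanism; the Networking struct and template here are invented for illustration, with values taken from the generated config.
	
	    package main
	
	    import (
	    	"os"
	    	"text/template"
	    )
	
	    // Networking holds only the fields the toy template needs.
	    type Networking struct {
	    	DNSDomain     string
	    	PodSubnet     string
	    	ServiceSubnet string
	    }
	
	    var tmpl = template.Must(template.New("net").Parse(`networking:
	      dnsDomain: {{.DNSDomain}}
	      podSubnet: "{{.PodSubnet}}"
	      serviceSubnet: {{.ServiceSubnet}}
	    `))
	
	    func main() {
	    	// Values taken from the generated config above.
	    	_ = tmpl.Execute(os.Stdout, Networking{
	    		DNSDomain:     "cluster.local",
	    		PodSubnet:     "10.244.0.0/16",
	    		ServiceSubnet: "10.96.0.0/12",
	    	})
	    }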
	
	I1124 14:18:57.747873  195877 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1124 14:18:57.758479  195877 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 14:18:57.758547  195877 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 14:18:57.765949  195877 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1124 14:18:57.777727  195877 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 14:18:57.789876  195877 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1124 14:18:57.801850  195877 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1124 14:18:57.805582  195877 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 14:18:57.815084  195877 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 14:18:57.924230  195877 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 14:18:57.947873  195877 certs.go:69] Setting up /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/no-preload-444317 for IP: 192.168.76.2
	I1124 14:18:57.947910  195877 certs.go:195] generating shared ca certs ...
	I1124 14:18:57.947927  195877 certs.go:227] acquiring lock for ca certs: {Name:mk5b88bcf3bee8e73291a2c9c79f99bafa2afa7b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:18:57.948130  195877 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21932-2805/.minikube/ca.key
	I1124 14:18:57.948203  195877 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21932-2805/.minikube/proxy-client-ca.key
	I1124 14:18:57.948217  195877 certs.go:257] generating profile certs ...
	I1124 14:18:57.948331  195877 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/no-preload-444317/client.key
	I1124 14:18:57.948444  195877 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/no-preload-444317/apiserver.key.cfbe8468
	I1124 14:18:57.948520  195877 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/no-preload-444317/proxy-client.key
	I1124 14:18:57.948666  195877 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2805/.minikube/certs/4611.pem (1338 bytes)
	W1124 14:18:57.948727  195877 certs.go:480] ignoring /home/jenkins/minikube-integration/21932-2805/.minikube/certs/4611_empty.pem, impossibly tiny 0 bytes
	I1124 14:18:57.948746  195877 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca-key.pem (1679 bytes)
	I1124 14:18:57.948791  195877 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca.pem (1078 bytes)
	I1124 14:18:57.948839  195877 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2805/.minikube/certs/cert.pem (1123 bytes)
	I1124 14:18:57.948898  195877 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2805/.minikube/certs/key.pem (1675 bytes)
	I1124 14:18:57.948970  195877 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2805/.minikube/files/etc/ssl/certs/46112.pem (1708 bytes)
	I1124 14:18:57.949645  195877 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 14:18:57.972783  195877 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1124 14:18:57.995922  195877 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 14:18:58.022652  195877 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1124 14:18:58.048157  195877 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/no-preload-444317/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1124 14:18:58.073183  195877 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/no-preload-444317/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1124 14:18:58.099902  195877 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/no-preload-444317/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 14:18:58.121485  195877 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/no-preload-444317/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1124 14:18:58.148433  195877 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/certs/4611.pem --> /usr/share/ca-certificates/4611.pem (1338 bytes)
	I1124 14:18:58.167905  195877 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/files/etc/ssl/certs/46112.pem --> /usr/share/ca-certificates/46112.pem (1708 bytes)
	I1124 14:18:58.199726  195877 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 14:18:58.222315  195877 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 14:18:58.240413  195877 ssh_runner.go:195] Run: openssl version
	I1124 14:18:58.249395  195877 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4611.pem && ln -fs /usr/share/ca-certificates/4611.pem /etc/ssl/certs/4611.pem"
	I1124 14:18:58.259582  195877 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4611.pem
	I1124 14:18:58.263328  195877 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 13:21 /usr/share/ca-certificates/4611.pem
	I1124 14:18:58.263417  195877 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4611.pem
	I1124 14:18:58.307611  195877 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4611.pem /etc/ssl/certs/51391683.0"
	I1124 14:18:58.315693  195877 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/46112.pem && ln -fs /usr/share/ca-certificates/46112.pem /etc/ssl/certs/46112.pem"
	I1124 14:18:58.324034  195877 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/46112.pem
	I1124 14:18:58.327816  195877 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 13:21 /usr/share/ca-certificates/46112.pem
	I1124 14:18:58.327937  195877 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/46112.pem
	I1124 14:18:58.369057  195877 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/46112.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 14:18:58.377879  195877 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 14:18:58.386424  195877 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 14:18:58.390146  195877 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 13:14 /usr/share/ca-certificates/minikubeCA.pem
	I1124 14:18:58.390211  195877 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 14:18:58.431075  195877 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
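	The ls/openssl/ln triples above implement OpenSSL's subject-hash lookup convention: each trusted cert is linked at /etc/ssl/certs/<hash>.0, where <hash> is what openssl x509 -hash prints (b5213941 for minikubeCA here). The same pattern in two lines, as a sketch:
	
	    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"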
	I1124 14:18:58.439651  195877 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 14:18:58.443786  195877 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1124 14:18:58.485970  195877 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1124 14:18:58.528856  195877 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1124 14:18:58.581677  195877 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1124 14:18:58.640443  195877 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1124 14:18:58.721462  195877 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
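	Each -checkend 86400 probe above exits non-zero when the certificate expires within 86400 seconds (24 hours), which is how minikube decides whether a cert needs regeneration. Standalone equivalent, as a sketch:
	
	    if ! sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; then
	      echo "certificate expires within 24h"   # minikube would regenerate it
	    fi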
	I1124 14:18:58.810451  195877 kubeadm.go:401] StartCluster: {Name:no-preload-444317 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-444317 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 14:18:58.810597  195877 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 14:18:58.810701  195877 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 14:18:58.844388  195877 cri.go:89] found id: "6fb3ae76e7269290f5063bc5ecd82c590e464d07c67d0618070feb631692598d"
	I1124 14:18:58.844458  195877 cri.go:89] found id: "033ef6fd0ada365a2ecc235eed62496fe7b0a609cd2b260dacf36429246eb827"
	I1124 14:18:58.844479  195877 cri.go:89] found id: "838b8a2e9df2c5d45fa6bef18fa814af0df8f6efe64027561859a41453484af0"
	I1124 14:18:58.844510  195877 cri.go:89] found id: "9fae20cc90ea0e80a0e993b46f094ba9011120aee92f560781378c0ce54c97cb"
	I1124 14:18:58.844541  195877 cri.go:89] found id: ""
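	The four IDs above are the kube-system containers CRI-O reports; the empty final entry is most likely just the trailing newline of the crictl output. A hypothetical follow-up, not run by the test, to look at one of them:
	
	    # crictl inspect dumps the runtime state and config for a container ID.
	    sudo crictl inspect 6fb3ae76e7269290f5063bc5ecd82c590e464d07c67d0618070feb631692598d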
	I1124 14:18:58.844626  195877 ssh_runner.go:195] Run: sudo runc list -f json
	W1124 14:18:58.860284  195877 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T14:18:58Z" level=error msg="open /run/runc: no such file or directory"
	I1124 14:18:58.860441  195877 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 14:18:58.870946  195877 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1124 14:18:58.871001  195877 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1124 14:18:58.871081  195877 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1124 14:18:58.880996  195877 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1124 14:18:58.881945  195877 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-444317" does not appear in /home/jenkins/minikube-integration/21932-2805/kubeconfig
	I1124 14:18:58.882586  195877 kubeconfig.go:62] /home/jenkins/minikube-integration/21932-2805/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-444317" cluster setting kubeconfig missing "no-preload-444317" context setting]
	I1124 14:18:58.883502  195877 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2805/kubeconfig: {Name:mk95d10d27091d631e85a5a3c35d5e4e38630871 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:18:58.885459  195877 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1124 14:18:58.903226  195877 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1124 14:18:58.903304  195877 kubeadm.go:602] duration metric: took 32.270934ms to restartPrimaryControlPlane
	I1124 14:18:58.903329  195877 kubeadm.go:403] duration metric: took 92.887507ms to StartCluster
	I1124 14:18:58.903396  195877 settings.go:142] acquiring lock: {Name:mk89c1ba43c874315f683e1eb3a8f5ff3817a931 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:18:58.903497  195877 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21932-2805/kubeconfig
	I1124 14:18:58.905116  195877 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2805/kubeconfig: {Name:mk95d10d27091d631e85a5a3c35d5e4e38630871 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:18:58.905449  195877 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 14:18:58.905834  195877 config.go:182] Loaded profile config "no-preload-444317": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 14:18:58.905917  195877 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 14:18:58.906044  195877 addons.go:70] Setting storage-provisioner=true in profile "no-preload-444317"
	I1124 14:18:58.906069  195877 addons.go:70] Setting default-storageclass=true in profile "no-preload-444317"
	I1124 14:18:58.906093  195877 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-444317"
	I1124 14:18:58.906075  195877 addons.go:239] Setting addon storage-provisioner=true in "no-preload-444317"
	W1124 14:18:58.906173  195877 addons.go:248] addon storage-provisioner should already be in state true
	I1124 14:18:58.906197  195877 host.go:66] Checking if "no-preload-444317" exists ...
	I1124 14:18:58.906393  195877 cli_runner.go:164] Run: docker container inspect no-preload-444317 --format={{.State.Status}}
	I1124 14:18:58.906753  195877 cli_runner.go:164] Run: docker container inspect no-preload-444317 --format={{.State.Status}}
	I1124 14:18:58.906058  195877 addons.go:70] Setting dashboard=true in profile "no-preload-444317"
	I1124 14:18:58.906966  195877 addons.go:239] Setting addon dashboard=true in "no-preload-444317"
	W1124 14:18:58.906974  195877 addons.go:248] addon dashboard should already be in state true
	I1124 14:18:58.907002  195877 host.go:66] Checking if "no-preload-444317" exists ...
	I1124 14:18:58.907523  195877 cli_runner.go:164] Run: docker container inspect no-preload-444317 --format={{.State.Status}}
	I1124 14:18:58.915400  195877 out.go:179] * Verifying Kubernetes components...
	I1124 14:18:58.918500  195877 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 14:18:58.963436  195877 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 14:18:58.966892  195877 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 14:18:58.966914  195877 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 14:18:58.966981  195877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-444317
	I1124 14:18:58.969909  195877 addons.go:239] Setting addon default-storageclass=true in "no-preload-444317"
	W1124 14:18:58.969931  195877 addons.go:248] addon default-storageclass should already be in state true
	I1124 14:18:58.969958  195877 host.go:66] Checking if "no-preload-444317" exists ...
	I1124 14:18:58.970385  195877 cli_runner.go:164] Run: docker container inspect no-preload-444317 --format={{.State.Status}}
	I1124 14:18:58.988691  195877 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1124 14:18:58.994856  195877 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1124 14:18:58.999544  195877 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1124 14:18:58.999573  195877 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1124 14:18:58.999665  195877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-444317
	I1124 14:18:59.011009  195877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/no-preload-444317/id_rsa Username:docker}
	I1124 14:18:59.016182  195877 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 14:18:59.016203  195877 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 14:18:59.016266  195877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-444317
	I1124 14:18:59.058169  195877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/no-preload-444317/id_rsa Username:docker}
	I1124 14:18:59.066839  195877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/no-preload-444317/id_rsa Username:docker}
	I1124 14:18:59.257782  195877 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 14:18:59.269516  195877 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 14:18:59.365007  195877 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 14:18:59.423932  195877 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1124 14:18:59.423962  195877 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1124 14:18:59.504122  195877 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1124 14:18:59.504200  195877 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1124 14:18:59.568452  195877 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1124 14:18:59.568473  195877 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1124 14:18:59.620128  195877 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1124 14:18:59.620190  195877 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1124 14:18:59.654923  195877 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1124 14:18:59.654988  195877 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1124 14:18:59.672261  195877 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1124 14:18:59.672331  195877 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1124 14:18:59.692707  195877 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1124 14:18:59.692770  195877 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1124 14:18:59.717986  195877 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1124 14:18:59.718049  195877 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1124 14:18:59.737190  195877 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1124 14:18:59.737256  195877 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1124 14:18:59.752757  195877 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
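	All ten dashboard manifests were staged under /etc/kubernetes/addons/ first, so the ten -f flags above could plausibly be collapsed into one directory apply; a sketch, not what minikube actually runs (apply is idempotent, so re-touching the storage manifests in the same directory would be harmless):
	
	    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	      /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/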
	W1124 14:18:58.125983  191849 node_ready.go:57] node "embed-certs-720293" has "Ready":"False" status (will retry)
	W1124 14:19:00.627376  191849 node_ready.go:57] node "embed-certs-720293" has "Ready":"False" status (will retry)
	I1124 14:19:05.940315  195877 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.682502069s)
	I1124 14:19:05.940367  195877 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (6.670833095s)
	I1124 14:19:05.940390  195877 node_ready.go:35] waiting up to 6m0s for node "no-preload-444317" to be "Ready" ...
	I1124 14:19:05.940693  195877 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.575620961s)
	I1124 14:19:05.940933  195877 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (6.188100401s)
	I1124 14:19:05.944435  195877 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-444317 addons enable metrics-server
	
	I1124 14:19:05.978839  195877 node_ready.go:49] node "no-preload-444317" is "Ready"
	I1124 14:19:05.978865  195877 node_ready.go:38] duration metric: took 38.463926ms for node "no-preload-444317" to be "Ready" ...
	I1124 14:19:05.978878  195877 api_server.go:52] waiting for apiserver process to appear ...
	I1124 14:19:05.978940  195877 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 14:19:05.998456  195877 api_server.go:72] duration metric: took 7.09294197s to wait for apiserver process to appear ...
	I1124 14:19:05.998483  195877 api_server.go:88] waiting for apiserver healthz status ...
	I1124 14:19:05.998503  195877 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 14:19:06.004712  195877 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	W1124 14:19:03.125252  191849 node_ready.go:57] node "embed-certs-720293" has "Ready":"False" status (will retry)
	I1124 14:19:04.124783  191849 node_ready.go:49] node "embed-certs-720293" is "Ready"
	I1124 14:19:04.124818  191849 node_ready.go:38] duration metric: took 40.503308958s for node "embed-certs-720293" to be "Ready" ...
	I1124 14:19:04.124832  191849 api_server.go:52] waiting for apiserver process to appear ...
	I1124 14:19:04.124886  191849 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 14:19:04.152445  191849 api_server.go:72] duration metric: took 41.923415953s to wait for apiserver process to appear ...
	I1124 14:19:04.152473  191849 api_server.go:88] waiting for apiserver healthz status ...
	I1124 14:19:04.152492  191849 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1124 14:19:04.171445  191849 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1124 14:19:04.172777  191849 api_server.go:141] control plane version: v1.34.1
	I1124 14:19:04.172801  191849 api_server.go:131] duration metric: took 20.320586ms to wait for apiserver health ...
	I1124 14:19:04.172811  191849 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 14:19:04.196388  191849 system_pods.go:59] 8 kube-system pods found
	I1124 14:19:04.196425  191849 system_pods.go:61] "coredns-66bc5c9577-6nztq" [9fbc8e0e-67a3-4086-aa1d-29b18f0c8d19] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 14:19:04.196433  191849 system_pods.go:61] "etcd-embed-certs-720293" [bc16ed26-fa6f-4c97-836c-c9f0b7f731aa] Running
	I1124 14:19:04.196439  191849 system_pods.go:61] "kindnet-ft88w" [7966e19b-c109-4372-8b9d-53d6f04dd7e7] Running
	I1124 14:19:04.196442  191849 system_pods.go:61] "kube-apiserver-embed-certs-720293" [8cdcb85e-986c-4ce2-b890-a8d96ea344c3] Running
	I1124 14:19:04.196447  191849 system_pods.go:61] "kube-controller-manager-embed-certs-720293" [9e5790d2-8178-4215-9b38-ffedd4359966] Running
	I1124 14:19:04.196451  191849 system_pods.go:61] "kube-proxy-pwpl4" [9404897b-5bae-4f03-987a-01e4ec7795a9] Running
	I1124 14:19:04.196454  191849 system_pods.go:61] "kube-scheduler-embed-certs-720293" [a289a525-28c8-45a8-a4e4-dde78e1ef777] Running
	I1124 14:19:04.196460  191849 system_pods.go:61] "storage-provisioner" [f6c6574d-20bd-49e9-86e6-b0d81b3490c6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 14:19:04.196466  191849 system_pods.go:74] duration metric: took 23.648925ms to wait for pod list to return data ...
	I1124 14:19:04.196474  191849 default_sa.go:34] waiting for default service account to be created ...
	I1124 14:19:04.218343  191849 default_sa.go:45] found service account: "default"
	I1124 14:19:04.218367  191849 default_sa.go:55] duration metric: took 21.887249ms for default service account to be created ...
	I1124 14:19:04.218377  191849 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 14:19:04.287240  191849 system_pods.go:86] 8 kube-system pods found
	I1124 14:19:04.287277  191849 system_pods.go:89] "coredns-66bc5c9577-6nztq" [9fbc8e0e-67a3-4086-aa1d-29b18f0c8d19] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 14:19:04.287284  191849 system_pods.go:89] "etcd-embed-certs-720293" [bc16ed26-fa6f-4c97-836c-c9f0b7f731aa] Running
	I1124 14:19:04.287301  191849 system_pods.go:89] "kindnet-ft88w" [7966e19b-c109-4372-8b9d-53d6f04dd7e7] Running
	I1124 14:19:04.287313  191849 system_pods.go:89] "kube-apiserver-embed-certs-720293" [8cdcb85e-986c-4ce2-b890-a8d96ea344c3] Running
	I1124 14:19:04.287318  191849 system_pods.go:89] "kube-controller-manager-embed-certs-720293" [9e5790d2-8178-4215-9b38-ffedd4359966] Running
	I1124 14:19:04.287322  191849 system_pods.go:89] "kube-proxy-pwpl4" [9404897b-5bae-4f03-987a-01e4ec7795a9] Running
	I1124 14:19:04.287333  191849 system_pods.go:89] "kube-scheduler-embed-certs-720293" [a289a525-28c8-45a8-a4e4-dde78e1ef777] Running
	I1124 14:19:04.287339  191849 system_pods.go:89] "storage-provisioner" [f6c6574d-20bd-49e9-86e6-b0d81b3490c6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 14:19:04.287399  191849 retry.go:31] will retry after 250.648501ms: missing components: kube-dns
	I1124 14:19:04.543070  191849 system_pods.go:86] 8 kube-system pods found
	I1124 14:19:04.543112  191849 system_pods.go:89] "coredns-66bc5c9577-6nztq" [9fbc8e0e-67a3-4086-aa1d-29b18f0c8d19] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 14:19:04.543120  191849 system_pods.go:89] "etcd-embed-certs-720293" [bc16ed26-fa6f-4c97-836c-c9f0b7f731aa] Running
	I1124 14:19:04.543126  191849 system_pods.go:89] "kindnet-ft88w" [7966e19b-c109-4372-8b9d-53d6f04dd7e7] Running
	I1124 14:19:04.543131  191849 system_pods.go:89] "kube-apiserver-embed-certs-720293" [8cdcb85e-986c-4ce2-b890-a8d96ea344c3] Running
	I1124 14:19:04.543136  191849 system_pods.go:89] "kube-controller-manager-embed-certs-720293" [9e5790d2-8178-4215-9b38-ffedd4359966] Running
	I1124 14:19:04.543140  191849 system_pods.go:89] "kube-proxy-pwpl4" [9404897b-5bae-4f03-987a-01e4ec7795a9] Running
	I1124 14:19:04.543144  191849 system_pods.go:89] "kube-scheduler-embed-certs-720293" [a289a525-28c8-45a8-a4e4-dde78e1ef777] Running
	I1124 14:19:04.543148  191849 system_pods.go:89] "storage-provisioner" [f6c6574d-20bd-49e9-86e6-b0d81b3490c6] Running
	I1124 14:19:04.543158  191849 system_pods.go:126] duration metric: took 324.771669ms to wait for k8s-apps to be running ...
	I1124 14:19:04.543165  191849 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 14:19:04.543220  191849 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 14:19:04.563046  191849 system_svc.go:56] duration metric: took 19.872121ms WaitForService to wait for kubelet
	I1124 14:19:04.563072  191849 kubeadm.go:587] duration metric: took 42.334047606s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 14:19:04.563093  191849 node_conditions.go:102] verifying NodePressure condition ...
	I1124 14:19:04.566181  191849 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1124 14:19:04.566209  191849 node_conditions.go:123] node cpu capacity is 2
	I1124 14:19:04.566222  191849 node_conditions.go:105] duration metric: took 3.123602ms to run NodePressure ...
	I1124 14:19:04.566235  191849 start.go:242] waiting for startup goroutines ...
	I1124 14:19:04.566242  191849 start.go:247] waiting for cluster config update ...
	I1124 14:19:04.566254  191849 start.go:256] writing updated cluster config ...
	I1124 14:19:04.566517  191849 ssh_runner.go:195] Run: rm -f paused
	I1124 14:19:04.576221  191849 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 14:19:04.580174  191849 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-6nztq" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:19:05.586173  191849 pod_ready.go:94] pod "coredns-66bc5c9577-6nztq" is "Ready"
	I1124 14:19:05.586202  191849 pod_ready.go:86] duration metric: took 1.005966666s for pod "coredns-66bc5c9577-6nztq" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:19:05.589470  191849 pod_ready.go:83] waiting for pod "etcd-embed-certs-720293" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:19:05.597013  191849 pod_ready.go:94] pod "etcd-embed-certs-720293" is "Ready"
	I1124 14:19:05.597037  191849 pod_ready.go:86] duration metric: took 7.544138ms for pod "etcd-embed-certs-720293" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:19:05.600199  191849 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-720293" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:19:05.607792  191849 pod_ready.go:94] pod "kube-apiserver-embed-certs-720293" is "Ready"
	I1124 14:19:05.607857  191849 pod_ready.go:86] duration metric: took 7.63392ms for pod "kube-apiserver-embed-certs-720293" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:19:05.612486  191849 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-720293" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:19:05.784658  191849 pod_ready.go:94] pod "kube-controller-manager-embed-certs-720293" is "Ready"
	I1124 14:19:05.784734  191849 pod_ready.go:86] duration metric: took 172.144761ms for pod "kube-controller-manager-embed-certs-720293" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:19:05.985299  191849 pod_ready.go:83] waiting for pod "kube-proxy-pwpl4" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:19:06.383443  191849 pod_ready.go:94] pod "kube-proxy-pwpl4" is "Ready"
	I1124 14:19:06.383473  191849 pod_ready.go:86] duration metric: took 398.150173ms for pod "kube-proxy-pwpl4" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:19:06.585220  191849 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-720293" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:19:06.984033  191849 pod_ready.go:94] pod "kube-scheduler-embed-certs-720293" is "Ready"
	I1124 14:19:06.984073  191849 pod_ready.go:86] duration metric: took 398.830557ms for pod "kube-scheduler-embed-certs-720293" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:19:06.984086  191849 pod_ready.go:40] duration metric: took 2.407834174s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 14:19:07.050845  191849 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1124 14:19:07.054624  191849 out.go:179] * Done! kubectl is now configured to use "embed-certs-720293" cluster and "default" namespace by default
	I1124 14:19:06.007656  195877 addons.go:530] duration metric: took 7.101727935s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1124 14:19:06.011424  195877 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1124 14:19:06.011463  195877 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
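	The [+]/[-] list above is the verbose healthz body: a single post-start hook (rbac/bootstrap-roles) has not finished yet, so the endpoint reports 500 until it does, and the next poll below returns 200. Reproducing the probe by hand, as a sketch (anonymous access to /healthz may be disabled on some clusters, in which case this returns 401/403):
	
	    curl -k "https://192.168.76.2:8443/healthz?verbose"   # -k: skip TLS verification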
	I1124 14:19:06.499041  195877 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 14:19:06.510263  195877 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1124 14:19:06.511630  195877 api_server.go:141] control plane version: v1.34.1
	I1124 14:19:06.511658  195877 api_server.go:131] duration metric: took 513.167932ms to wait for apiserver health ...
	I1124 14:19:06.511667  195877 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 14:19:06.516929  195877 system_pods.go:59] 8 kube-system pods found
	I1124 14:19:06.516963  195877 system_pods.go:61] "coredns-66bc5c9577-lrh58" [feb6ac32-bf93-4488-9574-cdc018d6c759] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 14:19:06.516972  195877 system_pods.go:61] "etcd-no-preload-444317" [53c3a0fc-8ca0-4f78-b868-7164585a0b4b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 14:19:06.516976  195877 system_pods.go:61] "kindnet-zwxh6" [5314d14c-5c74-4df6-a25c-349e3ce92848] Running
	I1124 14:19:06.516984  195877 system_pods.go:61] "kube-apiserver-no-preload-444317" [46a06d31-31d6-4cb1-907b-1b57afa9d38d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1124 14:19:06.516991  195877 system_pods.go:61] "kube-controller-manager-no-preload-444317" [23c76436-e21f-41f8-9e26-c1028d80c3fc] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 14:19:06.516995  195877 system_pods.go:61] "kube-proxy-m4fb4" [51e5fbb8-6216-4c92-a92e-a618ffdb2cf5] Running
	I1124 14:19:06.517001  195877 system_pods.go:61] "kube-scheduler-no-preload-444317" [283cab90-1b87-4ce1-8ea7-c7a0c42a13f6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 14:19:06.517008  195877 system_pods.go:61] "storage-provisioner" [abff3443-a7cc-445c-94d1-a6e96ed61024] Running
	I1124 14:19:06.517015  195877 system_pods.go:74] duration metric: took 5.341841ms to wait for pod list to return data ...
	I1124 14:19:06.517027  195877 default_sa.go:34] waiting for default service account to be created ...
	I1124 14:19:06.521601  195877 default_sa.go:45] found service account: "default"
	I1124 14:19:06.521626  195877 default_sa.go:55] duration metric: took 4.592697ms for default service account to be created ...
	I1124 14:19:06.521637  195877 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 14:19:06.528260  195877 system_pods.go:86] 8 kube-system pods found
	I1124 14:19:06.528290  195877 system_pods.go:89] "coredns-66bc5c9577-lrh58" [feb6ac32-bf93-4488-9574-cdc018d6c759] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 14:19:06.528299  195877 system_pods.go:89] "etcd-no-preload-444317" [53c3a0fc-8ca0-4f78-b868-7164585a0b4b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 14:19:06.528305  195877 system_pods.go:89] "kindnet-zwxh6" [5314d14c-5c74-4df6-a25c-349e3ce92848] Running
	I1124 14:19:06.528313  195877 system_pods.go:89] "kube-apiserver-no-preload-444317" [46a06d31-31d6-4cb1-907b-1b57afa9d38d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1124 14:19:06.528319  195877 system_pods.go:89] "kube-controller-manager-no-preload-444317" [23c76436-e21f-41f8-9e26-c1028d80c3fc] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 14:19:06.528327  195877 system_pods.go:89] "kube-proxy-m4fb4" [51e5fbb8-6216-4c92-a92e-a618ffdb2cf5] Running
	I1124 14:19:06.528333  195877 system_pods.go:89] "kube-scheduler-no-preload-444317" [283cab90-1b87-4ce1-8ea7-c7a0c42a13f6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 14:19:06.528340  195877 system_pods.go:89] "storage-provisioner" [abff3443-a7cc-445c-94d1-a6e96ed61024] Running
	I1124 14:19:06.528348  195877 system_pods.go:126] duration metric: took 6.704654ms to wait for k8s-apps to be running ...
	I1124 14:19:06.528366  195877 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 14:19:06.528419  195877 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 14:19:06.542876  195877 system_svc.go:56] duration metric: took 14.50102ms WaitForService to wait for kubelet
	I1124 14:19:06.542906  195877 kubeadm.go:587] duration metric: took 7.637399656s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 14:19:06.542934  195877 node_conditions.go:102] verifying NodePressure condition ...
	I1124 14:19:06.550112  195877 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1124 14:19:06.550147  195877 node_conditions.go:123] node cpu capacity is 2
	I1124 14:19:06.550163  195877 node_conditions.go:105] duration metric: took 7.223043ms to run NodePressure ...
	I1124 14:19:06.550176  195877 start.go:242] waiting for startup goroutines ...
	I1124 14:19:06.550184  195877 start.go:247] waiting for cluster config update ...
	I1124 14:19:06.550196  195877 start.go:256] writing updated cluster config ...
	I1124 14:19:06.550483  195877 ssh_runner.go:195] Run: rm -f paused
	I1124 14:19:06.555374  195877 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 14:19:06.561698  195877 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-lrh58" in "kube-system" namespace to be "Ready" or be gone ...
	W1124 14:19:08.582086  195877 pod_ready.go:104] pod "coredns-66bc5c9577-lrh58" is not "Ready", error: <nil>
	W1124 14:19:11.070779  195877 pod_ready.go:104] pod "coredns-66bc5c9577-lrh58" is not "Ready", error: <nil>
	W1124 14:19:13.568377  195877 pod_ready.go:104] pod "coredns-66bc5c9577-lrh58" is not "Ready", error: <nil>
	W1124 14:19:15.568954  195877 pod_ready.go:104] pod "coredns-66bc5c9577-lrh58" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Nov 24 14:19:04 embed-certs-720293 crio[835]: time="2025-11-24T14:19:04.223819509Z" level=info msg="Created container 3da05df29f9ddd1f02d7c9c8c4ad95309d50e1d6d53a3c8b04e1a1904663e6ea: kube-system/coredns-66bc5c9577-6nztq/coredns" id=e60e4191-1219-4ad9-9cfc-d1a0c375f52a name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 14:19:04 embed-certs-720293 crio[835]: time="2025-11-24T14:19:04.22509912Z" level=info msg="Starting container: 3da05df29f9ddd1f02d7c9c8c4ad95309d50e1d6d53a3c8b04e1a1904663e6ea" id=63f95498-f056-43d8-bca3-ea565a3b2eb4 name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 14:19:04 embed-certs-720293 crio[835]: time="2025-11-24T14:19:04.228049035Z" level=info msg="Started container" PID=1740 containerID=3da05df29f9ddd1f02d7c9c8c4ad95309d50e1d6d53a3c8b04e1a1904663e6ea description=kube-system/coredns-66bc5c9577-6nztq/coredns id=63f95498-f056-43d8-bca3-ea565a3b2eb4 name=/runtime.v1.RuntimeService/StartContainer sandboxID=dedb2dbedaae5663c5834a0b665da7e9b103b0fccff6221816a0e4ed36d8b7de
	Nov 24 14:19:07 embed-certs-720293 crio[835]: time="2025-11-24T14:19:07.614127384Z" level=info msg="Running pod sandbox: default/busybox/POD" id=1673881d-1396-45ad-b522-9306818dac18 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 24 14:19:07 embed-certs-720293 crio[835]: time="2025-11-24T14:19:07.614207607Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 14:19:07 embed-certs-720293 crio[835]: time="2025-11-24T14:19:07.619443577Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:fabc0f43528d74e19157b94608f06ced822bdefbfcee5ce69a7c0effeb396451 UID:512e7325-45b0-48e8-a89e-558464cf3040 NetNS:/var/run/netns/3e4e02e3-4fdb-4796-bdd0-14e5e10f1c49 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012d640}] Aliases:map[]}"
	Nov 24 14:19:07 embed-certs-720293 crio[835]: time="2025-11-24T14:19:07.619615427Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 24 14:19:07 embed-certs-720293 crio[835]: time="2025-11-24T14:19:07.632362894Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:fabc0f43528d74e19157b94608f06ced822bdefbfcee5ce69a7c0effeb396451 UID:512e7325-45b0-48e8-a89e-558464cf3040 NetNS:/var/run/netns/3e4e02e3-4fdb-4796-bdd0-14e5e10f1c49 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012d640}] Aliases:map[]}"
	Nov 24 14:19:07 embed-certs-720293 crio[835]: time="2025-11-24T14:19:07.632532873Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 24 14:19:07 embed-certs-720293 crio[835]: time="2025-11-24T14:19:07.639050932Z" level=info msg="Ran pod sandbox fabc0f43528d74e19157b94608f06ced822bdefbfcee5ce69a7c0effeb396451 with infra container: default/busybox/POD" id=1673881d-1396-45ad-b522-9306818dac18 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 24 14:19:07 embed-certs-720293 crio[835]: time="2025-11-24T14:19:07.641255362Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=a3ca128e-7ce2-4219-ae71-277d956e54f1 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 14:19:07 embed-certs-720293 crio[835]: time="2025-11-24T14:19:07.641523664Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=a3ca128e-7ce2-4219-ae71-277d956e54f1 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 14:19:07 embed-certs-720293 crio[835]: time="2025-11-24T14:19:07.641597897Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=a3ca128e-7ce2-4219-ae71-277d956e54f1 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 14:19:07 embed-certs-720293 crio[835]: time="2025-11-24T14:19:07.65110958Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=646c1ff7-e3d9-4400-8602-ee089a59d7cc name=/runtime.v1.ImageService/PullImage
	Nov 24 14:19:07 embed-certs-720293 crio[835]: time="2025-11-24T14:19:07.652553993Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 24 14:19:09 embed-certs-720293 crio[835]: time="2025-11-24T14:19:09.877524942Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=646c1ff7-e3d9-4400-8602-ee089a59d7cc name=/runtime.v1.ImageService/PullImage
	Nov 24 14:19:09 embed-certs-720293 crio[835]: time="2025-11-24T14:19:09.878605593Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=7749c933-ed10-47f3-b145-dc56d61e54d3 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 14:19:09 embed-certs-720293 crio[835]: time="2025-11-24T14:19:09.880516293Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=175fec62-839a-4945-a576-945bc9fa793a name=/runtime.v1.ImageService/ImageStatus
	Nov 24 14:19:09 embed-certs-720293 crio[835]: time="2025-11-24T14:19:09.887518804Z" level=info msg="Creating container: default/busybox/busybox" id=77d9d184-1ae7-414b-b69f-60ad9e7bfa79 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 14:19:09 embed-certs-720293 crio[835]: time="2025-11-24T14:19:09.887739081Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 14:19:09 embed-certs-720293 crio[835]: time="2025-11-24T14:19:09.897228864Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 14:19:09 embed-certs-720293 crio[835]: time="2025-11-24T14:19:09.89815877Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 14:19:09 embed-certs-720293 crio[835]: time="2025-11-24T14:19:09.915517642Z" level=info msg="Created container 277783de63cf98e9133fd4bfa4161259d94356a65a2f7165e55c5e8cf8912e8c: default/busybox/busybox" id=77d9d184-1ae7-414b-b69f-60ad9e7bfa79 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 14:19:09 embed-certs-720293 crio[835]: time="2025-11-24T14:19:09.91658368Z" level=info msg="Starting container: 277783de63cf98e9133fd4bfa4161259d94356a65a2f7165e55c5e8cf8912e8c" id=10dc1a45-e9fb-4f63-a42d-db5380405669 name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 14:19:09 embed-certs-720293 crio[835]: time="2025-11-24T14:19:09.918342894Z" level=info msg="Started container" PID=1799 containerID=277783de63cf98e9133fd4bfa4161259d94356a65a2f7165e55c5e8cf8912e8c description=default/busybox/busybox id=10dc1a45-e9fb-4f63-a42d-db5380405669 name=/runtime.v1.RuntimeService/StartContainer sandboxID=fabc0f43528d74e19157b94608f06ced822bdefbfcee5ce69a7c0effeb396451
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	277783de63cf9       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   9 seconds ago        Running             busybox                   0                   fabc0f43528d7       busybox                                      default
	3da05df29f9dd       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      15 seconds ago       Running             coredns                   0                   dedb2dbedaae5       coredns-66bc5c9577-6nztq                     kube-system
	bd49a97379a1a       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      15 seconds ago       Running             storage-provisioner       0                   0e877a46b1a1d       storage-provisioner                          kube-system
	7f9ebc4aec139       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      56 seconds ago       Running             kube-proxy                0                   f0b9595a6f065       kube-proxy-pwpl4                             kube-system
	78a0434f5f4ec       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                      56 seconds ago       Running             kindnet-cni               0                   73fc234fa1507       kindnet-ft88w                                kube-system
	1fc04b391bf58       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      About a minute ago   Running             kube-scheduler            0                   0857d77c8a2d0       kube-scheduler-embed-certs-720293            kube-system
	5de1b30dd94ff       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      About a minute ago   Running             kube-apiserver            0                   945edc163249e       kube-apiserver-embed-certs-720293            kube-system
	9abedf0e775af       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      About a minute ago   Running             kube-controller-manager   0                   827bb7c75be1b       kube-controller-manager-embed-certs-720293   kube-system
	0e95d44961900       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      About a minute ago   Running             etcd                      0                   31da46cc74796       etcd-embed-certs-720293                      kube-system
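	
	The table above is CRI-level state for every container on the node; a close hand-run equivalent (a guess at the log collector's exact invocation) is:
	
	    sudo crictl ps -a   # -a includes exited containers as well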
	
	
	==> coredns [3da05df29f9ddd1f02d7c9c8c4ad95309d50e1d6d53a3c8b04e1a1904663e6ea] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:45711 - 58308 "HINFO IN 2549784168976822839.7368226478010022399. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.045433034s
	
	
	==> describe nodes <==
	Name:               embed-certs-720293
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-720293
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b5d1c9f4e75f4e638a533695fd62619949cefcab
	                    minikube.k8s.io/name=embed-certs-720293
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T14_18_18_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 14:18:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-720293
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 14:19:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 14:19:18 +0000   Mon, 24 Nov 2025 14:18:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 14:19:18 +0000   Mon, 24 Nov 2025 14:18:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 14:19:18 +0000   Mon, 24 Nov 2025 14:18:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Nov 2025 14:19:18 +0000   Mon, 24 Nov 2025 14:19:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    embed-certs-720293
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 7283ea1857f18f20a875c29069214c9d
	  System UUID:                f982cc7c-133c-414c-b480-dd4b30e870c6
	  Boot ID:                    1b5f797b-5607-4a65-8de2-379783b7e272
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  kube-system                 coredns-66bc5c9577-6nztq                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     57s
	  kube-system                 etcd-embed-certs-720293                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         62s
	  kube-system                 kindnet-ft88w                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      58s
	  kube-system                 kube-apiserver-embed-certs-720293             250m (12%)    0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                 kube-controller-manager-embed-certs-720293    200m (10%)    0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                 kube-proxy-pwpl4                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         58s
	  kube-system                 kube-scheduler-embed-certs-720293             100m (5%)     0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 55s                kube-proxy       
	  Normal   Starting                 71s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 71s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  71s (x8 over 71s)  kubelet          Node embed-certs-720293 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    71s (x8 over 71s)  kubelet          Node embed-certs-720293 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     71s (x8 over 71s)  kubelet          Node embed-certs-720293 status is now: NodeHasSufficientPID
	  Normal   Starting                 62s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 62s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  62s                kubelet          Node embed-certs-720293 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    62s                kubelet          Node embed-certs-720293 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     62s                kubelet          Node embed-certs-720293 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           58s                node-controller  Node embed-certs-720293 event: Registered Node embed-certs-720293 in Controller
	  Normal   NodeReady                16s                kubelet          Node embed-certs-720293 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov24 13:53] overlayfs: idmapped layers are currently not supported
	[Nov24 13:54] overlayfs: idmapped layers are currently not supported
	[Nov24 13:56] overlayfs: idmapped layers are currently not supported
	[Nov24 13:57] overlayfs: idmapped layers are currently not supported
	[Nov24 13:58] overlayfs: idmapped layers are currently not supported
	[  +2.963383] overlayfs: idmapped layers are currently not supported
	[ +47.364934] overlayfs: idmapped layers are currently not supported
	[Nov24 13:59] overlayfs: idmapped layers are currently not supported
	[Nov24 14:00] overlayfs: idmapped layers are currently not supported
	[ +26.972375] overlayfs: idmapped layers are currently not supported
	[Nov24 14:02] overlayfs: idmapped layers are currently not supported
	[Nov24 14:03] overlayfs: idmapped layers are currently not supported
	[Nov24 14:05] overlayfs: idmapped layers are currently not supported
	[Nov24 14:07] overlayfs: idmapped layers are currently not supported
	[ +22.741489] overlayfs: idmapped layers are currently not supported
	[Nov24 14:11] overlayfs: idmapped layers are currently not supported
	[Nov24 14:13] overlayfs: idmapped layers are currently not supported
	[ +29.661409] overlayfs: idmapped layers are currently not supported
	[ +14.398898] overlayfs: idmapped layers are currently not supported
	[Nov24 14:14] overlayfs: idmapped layers are currently not supported
	[ +36.148198] overlayfs: idmapped layers are currently not supported
	[Nov24 14:16] overlayfs: idmapped layers are currently not supported
	[Nov24 14:17] overlayfs: idmapped layers are currently not supported
	[Nov24 14:18] overlayfs: idmapped layers are currently not supported
	[ +49.916713] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [0e95d4496190021bc482318464757d9a6048b00753620e59f2c529e3d30d1bc4] <==
	{"level":"warn","ts":"2025-11-24T14:18:12.785583Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37120","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:18:12.818344Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37130","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:18:12.830045Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37144","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:18:12.879868Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37162","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:18:12.902113Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37174","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:18:12.920216Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37182","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:18:12.930901Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37206","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:18:12.951260Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37224","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:18:12.966705Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37244","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:18:12.984737Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37258","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:18:13.015227Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37290","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:18:13.029095Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37306","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:18:13.049080Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37324","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:18:13.063458Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37346","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:18:13.085795Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37360","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:18:13.102454Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37368","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:18:13.116786Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37392","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:18:13.134957Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37408","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:18:13.152332Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37432","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:18:13.169149Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37458","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:18:13.202032Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37470","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:18:13.226742Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37498","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:18:13.242503Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37514","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:18:13.259624Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37538","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:18:13.332094Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37576","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 14:19:19 up  2:01,  0 user,  load average: 2.88, 2.99, 2.51
	Linux embed-certs-720293 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [78a0434f5f4ec8f1d54a36059c2658ddd22368c4e94d0dd2d023606a4454d532] <==
	I1124 14:18:23.112461       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1124 14:18:23.112733       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1124 14:18:23.112861       1 main.go:148] setting mtu 1500 for CNI 
	I1124 14:18:23.112873       1 main.go:178] kindnetd IP family: "ipv4"
	I1124 14:18:23.112884       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-24T14:18:23Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1124 14:18:23.328459       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1124 14:18:23.328491       1 controller.go:381] "Waiting for informer caches to sync"
	I1124 14:18:23.328501       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1124 14:18:23.329277       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1124 14:18:53.329470       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1124 14:18:53.329476       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1124 14:18:53.329604       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1124 14:18:53.329733       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1124 14:18:54.729477       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1124 14:18:54.729510       1 metrics.go:72] Registering metrics
	I1124 14:18:54.729576       1 controller.go:711] "Syncing nftables rules"
	I1124 14:19:03.333452       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1124 14:19:03.333561       1 main.go:301] handling current node
	I1124 14:19:13.327443       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1124 14:19:13.327606       1 main.go:301] handling current node
	
	
	==> kube-apiserver [5de1b30dd94ff36a4ea2c42e9880dd940c9ad7150091f26e20c267a261f9f0a2] <==
	I1124 14:18:14.244812       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1124 14:18:14.249841       1 controller.go:667] quota admission added evaluator for: namespaces
	I1124 14:18:14.256113       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1124 14:18:14.275151       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 14:18:14.275722       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 14:18:14.275844       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1124 14:18:14.288319       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1124 14:18:14.936260       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1124 14:18:14.942772       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1124 14:18:14.942796       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1124 14:18:15.784002       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1124 14:18:15.845013       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1124 14:18:15.959929       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1124 14:18:15.967718       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1124 14:18:15.968834       1 controller.go:667] quota admission added evaluator for: endpoints
	I1124 14:18:15.974217       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1124 14:18:16.142352       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1124 14:18:17.098021       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1124 14:18:17.115051       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1124 14:18:17.131634       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1124 14:18:21.350702       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 14:18:21.355304       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 14:18:21.847107       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1124 14:18:22.128936       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	E1124 14:19:17.506077       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8443->192.168.85.1:35280: use of closed network connection
	
	
	==> kube-controller-manager [9abedf0e775aff7ea1ea4a970583d8ab26a95c70ecd8d0dfdca47533907ebbf1] <==
	I1124 14:18:21.192199       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1124 14:18:21.192237       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1124 14:18:21.192245       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1124 14:18:21.192368       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1124 14:18:21.192449       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1124 14:18:21.193014       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1124 14:18:21.192225       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1124 14:18:21.192453       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1124 14:18:21.196738       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1124 14:18:21.196795       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1124 14:18:21.196841       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1124 14:18:21.196853       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1124 14:18:21.196861       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1124 14:18:21.199551       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1124 14:18:21.199666       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1124 14:18:21.199744       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-720293"
	I1124 14:18:21.199791       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1124 14:18:21.199829       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1124 14:18:21.202825       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1124 14:18:21.203005       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 14:18:21.207698       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="embed-certs-720293" podCIDRs=["10.244.0.0/24"]
	I1124 14:18:21.219129       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1124 14:18:21.239486       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1124 14:18:21.250077       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 14:19:06.207475       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [7f9ebc4aec1395a4acf9d9bb1c0ebf69fb001f5201e297e7b05914ef53392015] <==
	I1124 14:18:23.134775       1 server_linux.go:53] "Using iptables proxy"
	I1124 14:18:23.277056       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1124 14:18:23.391826       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1124 14:18:23.391886       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1124 14:18:23.392019       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 14:18:23.505305       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 14:18:23.505353       1 server_linux.go:132] "Using iptables Proxier"
	I1124 14:18:23.528998       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 14:18:23.530645       1 server.go:527] "Version info" version="v1.34.1"
	I1124 14:18:23.550894       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 14:18:23.552308       1 config.go:200] "Starting service config controller"
	I1124 14:18:23.552336       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 14:18:23.552354       1 config.go:106] "Starting endpoint slice config controller"
	I1124 14:18:23.552364       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 14:18:23.552377       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 14:18:23.552382       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1124 14:18:23.552998       1 config.go:309] "Starting node config controller"
	I1124 14:18:23.553012       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 14:18:23.553019       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1124 14:18:23.653110       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1124 14:18:23.653160       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1124 14:18:23.653177       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [1fc04b391bf58f3f9f4b14f5d9ef924825e957f7c5e225d2f5338d090f08640b] <==
	E1124 14:18:14.238857       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1124 14:18:14.239013       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1124 14:18:14.240764       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1124 14:18:14.240943       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1124 14:18:14.241770       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1124 14:18:14.241905       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1124 14:18:14.242029       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1124 14:18:14.242134       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1124 14:18:14.242276       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1124 14:18:14.242373       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1124 14:18:14.242607       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1124 14:18:14.242736       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1124 14:18:14.242881       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1124 14:18:14.242958       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1124 14:18:14.252014       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1124 14:18:15.059093       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1124 14:18:15.074272       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1124 14:18:15.118710       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1124 14:18:15.238126       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1124 14:18:15.276096       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1124 14:18:15.306767       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1124 14:18:15.396682       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1124 14:18:15.440390       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1124 14:18:15.441673       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	I1124 14:18:17.284296       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 24 14:18:21 embed-certs-720293 kubelet[1318]: I1124 14:18:21.900728    1318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9404897b-5bae-4f03-987a-01e4ec7795a9-lib-modules\") pod \"kube-proxy-pwpl4\" (UID: \"9404897b-5bae-4f03-987a-01e4ec7795a9\") " pod="kube-system/kube-proxy-pwpl4"
	Nov 24 14:18:21 embed-certs-720293 kubelet[1318]: I1124 14:18:21.900819    1318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/7966e19b-c109-4372-8b9d-53d6f04dd7e7-cni-cfg\") pod \"kindnet-ft88w\" (UID: \"7966e19b-c109-4372-8b9d-53d6f04dd7e7\") " pod="kube-system/kindnet-ft88w"
	Nov 24 14:18:21 embed-certs-720293 kubelet[1318]: I1124 14:18:21.900923    1318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7966e19b-c109-4372-8b9d-53d6f04dd7e7-xtables-lock\") pod \"kindnet-ft88w\" (UID: \"7966e19b-c109-4372-8b9d-53d6f04dd7e7\") " pod="kube-system/kindnet-ft88w"
	Nov 24 14:18:21 embed-certs-720293 kubelet[1318]: I1124 14:18:21.901015    1318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jp842\" (UniqueName: \"kubernetes.io/projected/7966e19b-c109-4372-8b9d-53d6f04dd7e7-kube-api-access-jp842\") pod \"kindnet-ft88w\" (UID: \"7966e19b-c109-4372-8b9d-53d6f04dd7e7\") " pod="kube-system/kindnet-ft88w"
	Nov 24 14:18:21 embed-certs-720293 kubelet[1318]: I1124 14:18:21.901104    1318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6szlp\" (UniqueName: \"kubernetes.io/projected/9404897b-5bae-4f03-987a-01e4ec7795a9-kube-api-access-6szlp\") pod \"kube-proxy-pwpl4\" (UID: \"9404897b-5bae-4f03-987a-01e4ec7795a9\") " pod="kube-system/kube-proxy-pwpl4"
	Nov 24 14:18:21 embed-certs-720293 kubelet[1318]: I1124 14:18:21.901186    1318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7966e19b-c109-4372-8b9d-53d6f04dd7e7-lib-modules\") pod \"kindnet-ft88w\" (UID: \"7966e19b-c109-4372-8b9d-53d6f04dd7e7\") " pod="kube-system/kindnet-ft88w"
	Nov 24 14:18:22 embed-certs-720293 kubelet[1318]: E1124 14:18:22.043910    1318 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Nov 24 14:18:22 embed-certs-720293 kubelet[1318]: E1124 14:18:22.043955    1318 projected.go:196] Error preparing data for projected volume kube-api-access-jp842 for pod kube-system/kindnet-ft88w: configmap "kube-root-ca.crt" not found
	Nov 24 14:18:22 embed-certs-720293 kubelet[1318]: E1124 14:18:22.044038    1318 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7966e19b-c109-4372-8b9d-53d6f04dd7e7-kube-api-access-jp842 podName:7966e19b-c109-4372-8b9d-53d6f04dd7e7 nodeName:}" failed. No retries permitted until 2025-11-24 14:18:22.544012248 +0000 UTC m=+5.609846136 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-jp842" (UniqueName: "kubernetes.io/projected/7966e19b-c109-4372-8b9d-53d6f04dd7e7-kube-api-access-jp842") pod "kindnet-ft88w" (UID: "7966e19b-c109-4372-8b9d-53d6f04dd7e7") : configmap "kube-root-ca.crt" not found
	Nov 24 14:18:22 embed-certs-720293 kubelet[1318]: E1124 14:18:22.082919    1318 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Nov 24 14:18:22 embed-certs-720293 kubelet[1318]: E1124 14:18:22.082967    1318 projected.go:196] Error preparing data for projected volume kube-api-access-6szlp for pod kube-system/kube-proxy-pwpl4: configmap "kube-root-ca.crt" not found
	Nov 24 14:18:22 embed-certs-720293 kubelet[1318]: E1124 14:18:22.083048    1318 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9404897b-5bae-4f03-987a-01e4ec7795a9-kube-api-access-6szlp podName:9404897b-5bae-4f03-987a-01e4ec7795a9 nodeName:}" failed. No retries permitted until 2025-11-24 14:18:22.58301371 +0000 UTC m=+5.648847606 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-6szlp" (UniqueName: "kubernetes.io/projected/9404897b-5bae-4f03-987a-01e4ec7795a9-kube-api-access-6szlp") pod "kube-proxy-pwpl4" (UID: "9404897b-5bae-4f03-987a-01e4ec7795a9") : configmap "kube-root-ca.crt" not found
	Nov 24 14:18:22 embed-certs-720293 kubelet[1318]: I1124 14:18:22.614215    1318 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 24 14:18:23 embed-certs-720293 kubelet[1318]: I1124 14:18:23.282655    1318 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-ft88w" podStartSLOduration=2.282625847 podStartE2EDuration="2.282625847s" podCreationTimestamp="2025-11-24 14:18:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 14:18:23.241599182 +0000 UTC m=+6.307433078" watchObservedRunningTime="2025-11-24 14:18:23.282625847 +0000 UTC m=+6.348459743"
	Nov 24 14:18:23 embed-certs-720293 kubelet[1318]: I1124 14:18:23.931472    1318 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-pwpl4" podStartSLOduration=2.9314496439999997 podStartE2EDuration="2.931449644s" podCreationTimestamp="2025-11-24 14:18:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 14:18:23.283116429 +0000 UTC m=+6.348950333" watchObservedRunningTime="2025-11-24 14:18:23.931449644 +0000 UTC m=+6.997283540"
	Nov 24 14:19:03 embed-certs-720293 kubelet[1318]: I1124 14:19:03.697860    1318 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 24 14:19:03 embed-certs-720293 kubelet[1318]: I1124 14:19:03.808472    1318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9fbc8e0e-67a3-4086-aa1d-29b18f0c8d19-config-volume\") pod \"coredns-66bc5c9577-6nztq\" (UID: \"9fbc8e0e-67a3-4086-aa1d-29b18f0c8d19\") " pod="kube-system/coredns-66bc5c9577-6nztq"
	Nov 24 14:19:03 embed-certs-720293 kubelet[1318]: I1124 14:19:03.808788    1318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fnbl\" (UniqueName: \"kubernetes.io/projected/9fbc8e0e-67a3-4086-aa1d-29b18f0c8d19-kube-api-access-9fnbl\") pod \"coredns-66bc5c9577-6nztq\" (UID: \"9fbc8e0e-67a3-4086-aa1d-29b18f0c8d19\") " pod="kube-system/coredns-66bc5c9577-6nztq"
	Nov 24 14:19:03 embed-certs-720293 kubelet[1318]: I1124 14:19:03.808840    1318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/f6c6574d-20bd-49e9-86e6-b0d81b3490c6-tmp\") pod \"storage-provisioner\" (UID: \"f6c6574d-20bd-49e9-86e6-b0d81b3490c6\") " pod="kube-system/storage-provisioner"
	Nov 24 14:19:03 embed-certs-720293 kubelet[1318]: I1124 14:19:03.808860    1318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48jkp\" (UniqueName: \"kubernetes.io/projected/f6c6574d-20bd-49e9-86e6-b0d81b3490c6-kube-api-access-48jkp\") pod \"storage-provisioner\" (UID: \"f6c6574d-20bd-49e9-86e6-b0d81b3490c6\") " pod="kube-system/storage-provisioner"
	Nov 24 14:19:04 embed-certs-720293 kubelet[1318]: W1124 14:19:04.109084    1318 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/70d00db6e7822d3b00ce565e804c7ecaca79c8fb11b2d568f3f30fb3df09a34b/crio-dedb2dbedaae5663c5834a0b665da7e9b103b0fccff6221816a0e4ed36d8b7de WatchSource:0}: Error finding container dedb2dbedaae5663c5834a0b665da7e9b103b0fccff6221816a0e4ed36d8b7de: Status 404 returned error can't find the container with id dedb2dbedaae5663c5834a0b665da7e9b103b0fccff6221816a0e4ed36d8b7de
	Nov 24 14:19:04 embed-certs-720293 kubelet[1318]: I1124 14:19:04.391205    1318 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=41.391183771 podStartE2EDuration="41.391183771s" podCreationTimestamp="2025-11-24 14:18:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 14:19:04.334749709 +0000 UTC m=+47.400583605" watchObservedRunningTime="2025-11-24 14:19:04.391183771 +0000 UTC m=+47.457017675"
	Nov 24 14:19:05 embed-certs-720293 kubelet[1318]: I1124 14:19:05.324496    1318 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-6nztq" podStartSLOduration=43.324475095 podStartE2EDuration="43.324475095s" podCreationTimestamp="2025-11-24 14:18:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 14:19:04.392861787 +0000 UTC m=+47.458695716" watchObservedRunningTime="2025-11-24 14:19:05.324475095 +0000 UTC m=+48.390308990"
	Nov 24 14:19:07 embed-certs-720293 kubelet[1318]: I1124 14:19:07.330735    1318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swjhs\" (UniqueName: \"kubernetes.io/projected/512e7325-45b0-48e8-a89e-558464cf3040-kube-api-access-swjhs\") pod \"busybox\" (UID: \"512e7325-45b0-48e8-a89e-558464cf3040\") " pod="default/busybox"
	Nov 24 14:19:07 embed-certs-720293 kubelet[1318]: W1124 14:19:07.637908    1318 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/70d00db6e7822d3b00ce565e804c7ecaca79c8fb11b2d568f3f30fb3df09a34b/crio-fabc0f43528d74e19157b94608f06ced822bdefbfcee5ce69a7c0effeb396451 WatchSource:0}: Error finding container fabc0f43528d74e19157b94608f06ced822bdefbfcee5ce69a7c0effeb396451: Status 404 returned error can't find the container with id fabc0f43528d74e19157b94608f06ced822bdefbfcee5ce69a7c0effeb396451
	
	
	==> storage-provisioner [bd49a97379a1a1570c5a1a5cef7f4cff2f04530331d3991ad32864e28c8610c0] <==
	I1124 14:19:04.257521       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1124 14:19:04.371025       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1124 14:19:04.371172       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1124 14:19:04.391983       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:19:04.413385       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1124 14:19:04.413644       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1124 14:19:04.413865       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-720293_b9026293-9fc2-43a0-91e7-54f67fd82a72!
	I1124 14:19:04.421054       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b678df36-62b8-4640-a341-449d1c1095fb", APIVersion:"v1", ResourceVersion:"463", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-720293_b9026293-9fc2-43a0-91e7-54f67fd82a72 became leader
	W1124 14:19:04.439533       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:19:04.458568       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1124 14:19:04.514767       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-720293_b9026293-9fc2-43a0-91e7-54f67fd82a72!
	W1124 14:19:06.461984       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:19:06.469589       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:19:08.473129       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:19:08.480801       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:19:10.487683       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:19:10.492404       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:19:12.495736       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:19:12.505306       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:19:14.508730       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:19:14.513411       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:19:16.517271       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:19:16.526337       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:19:18.530457       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:19:18.545378       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-720293 -n embed-certs-720293
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-720293 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.79s)
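The pause failure in the next section bottoms out in `sudo runc list -f json` exiting 1 with `open /run/runc: no such file or directory` (see the retry at 14:19:52 in the stderr below), after which the command returns exit status 80. A minimal sketch for re-running the same probes by hand against this node, assuming the profile name from this run and that the cluster is still up; the crictl and runc invocations mirror the ssh_runner lines in the stderr below:

	# hedged repro sketch -- profile name no-preload-444317 taken from this run
	minikube ssh -p no-preload-444317 -- sudo systemctl is-active kubelet
	# same CRI listing the pause path issues over ssh_runner:
	minikube ssh -p no-preload-444317 -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	# the call that fails in the log below; /run/runc is absent on this node:
	minikube ssh -p no-preload-444317 -- sudo runc list -f json

runc's default state root is /run/runc, so the error in the log is consistent with no container state having been written there on this crio node.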

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (7.98s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-444317 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p no-preload-444317 --alsologtostderr -v=1: exit status 80 (2.25494986s)

                                                
                                                
-- stdout --
	* Pausing node no-preload-444317 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1124 14:19:52.021985  200822 out.go:360] Setting OutFile to fd 1 ...
	I1124 14:19:52.022304  200822 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 14:19:52.022328  200822 out.go:374] Setting ErrFile to fd 2...
	I1124 14:19:52.022348  200822 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 14:19:52.022657  200822 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-2805/.minikube/bin
	I1124 14:19:52.022944  200822 out.go:368] Setting JSON to false
	I1124 14:19:52.022984  200822 mustload.go:66] Loading cluster: no-preload-444317
	I1124 14:19:52.023518  200822 config.go:182] Loaded profile config "no-preload-444317": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 14:19:52.024257  200822 cli_runner.go:164] Run: docker container inspect no-preload-444317 --format={{.State.Status}}
	I1124 14:19:52.050721  200822 host.go:66] Checking if "no-preload-444317" exists ...
	I1124 14:19:52.051056  200822 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 14:19:52.181008  200822 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:63 SystemTime:2025-11-24 14:19:52.168779235 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 14:19:52.181704  200822 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763503576-21924/minikube-v1.37.0-1763503576-21924-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763503576-21924-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:no-preload-444317 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1124 14:19:52.186409  200822 out.go:179] * Pausing node no-preload-444317 ... 
	I1124 14:19:52.190930  200822 host.go:66] Checking if "no-preload-444317" exists ...
	I1124 14:19:52.191285  200822 ssh_runner.go:195] Run: systemctl --version
	I1124 14:19:52.191326  200822 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-444317
	I1124 14:19:52.225585  200822 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/no-preload-444317/id_rsa Username:docker}
	I1124 14:19:52.378493  200822 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 14:19:52.396745  200822 pause.go:52] kubelet running: true
	I1124 14:19:52.396820  200822 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1124 14:19:52.702751  200822 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1124 14:19:52.702832  200822 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1124 14:19:52.785486  200822 cri.go:89] found id: "7500421b8d518959966543c2fb44123cf1e925d09b9f3a19358de4f5ccaf03f5"
	I1124 14:19:52.785509  200822 cri.go:89] found id: "91d70065b14a06021db8c9a017b68c7833b9f540e25841cd0422a6eac3a15b51"
	I1124 14:19:52.785514  200822 cri.go:89] found id: "b98558ac51e4ca540f3cbbc6b2f05fe5584b13c8eb1c8764289a95ecdde989f6"
	I1124 14:19:52.785518  200822 cri.go:89] found id: "9edacfe959f782420519cd918af883fab72dba651cea4c1003317aa7dbb5aee2"
	I1124 14:19:52.785531  200822 cri.go:89] found id: "1c3203cd2d06f35ce87a959266c2a7517112b74ce7421df704dc7f717e2c1e12"
	I1124 14:19:52.785544  200822 cri.go:89] found id: "6fb3ae76e7269290f5063bc5ecd82c590e464d07c67d0618070feb631692598d"
	I1124 14:19:52.785550  200822 cri.go:89] found id: "033ef6fd0ada365a2ecc235eed62496fe7b0a609cd2b260dacf36429246eb827"
	I1124 14:19:52.785553  200822 cri.go:89] found id: "838b8a2e9df2c5d45fa6bef18fa814af0df8f6efe64027561859a41453484af0"
	I1124 14:19:52.785556  200822 cri.go:89] found id: "9fae20cc90ea0e80a0e993b46f094ba9011120aee92f560781378c0ce54c97cb"
	I1124 14:19:52.785564  200822 cri.go:89] found id: "1e341c0d44a6f583b56404ba3cbb8e6d190a6bead92ac62577f51ce6b821e1ba"
	I1124 14:19:52.785568  200822 cri.go:89] found id: "4037ee2765bda709b510d5f015e77323b584c6dd7204ad7c638918dcd2628c45"
	I1124 14:19:52.785572  200822 cri.go:89] found id: ""
	I1124 14:19:52.785633  200822 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 14:19:52.806274  200822 retry.go:31] will retry after 361.596859ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T14:19:52Z" level=error msg="open /run/runc: no such file or directory"
	I1124 14:19:53.168853  200822 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 14:19:53.181893  200822 pause.go:52] kubelet running: false
	I1124 14:19:53.182001  200822 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1124 14:19:53.401240  200822 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1124 14:19:53.401338  200822 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1124 14:19:53.481316  200822 cri.go:89] found id: "7500421b8d518959966543c2fb44123cf1e925d09b9f3a19358de4f5ccaf03f5"
	I1124 14:19:53.481341  200822 cri.go:89] found id: "91d70065b14a06021db8c9a017b68c7833b9f540e25841cd0422a6eac3a15b51"
	I1124 14:19:53.481346  200822 cri.go:89] found id: "b98558ac51e4ca540f3cbbc6b2f05fe5584b13c8eb1c8764289a95ecdde989f6"
	I1124 14:19:53.481357  200822 cri.go:89] found id: "9edacfe959f782420519cd918af883fab72dba651cea4c1003317aa7dbb5aee2"
	I1124 14:19:53.481360  200822 cri.go:89] found id: "1c3203cd2d06f35ce87a959266c2a7517112b74ce7421df704dc7f717e2c1e12"
	I1124 14:19:53.481369  200822 cri.go:89] found id: "6fb3ae76e7269290f5063bc5ecd82c590e464d07c67d0618070feb631692598d"
	I1124 14:19:53.481373  200822 cri.go:89] found id: "033ef6fd0ada365a2ecc235eed62496fe7b0a609cd2b260dacf36429246eb827"
	I1124 14:19:53.481375  200822 cri.go:89] found id: "838b8a2e9df2c5d45fa6bef18fa814af0df8f6efe64027561859a41453484af0"
	I1124 14:19:53.481379  200822 cri.go:89] found id: "9fae20cc90ea0e80a0e993b46f094ba9011120aee92f560781378c0ce54c97cb"
	I1124 14:19:53.481385  200822 cri.go:89] found id: "1e341c0d44a6f583b56404ba3cbb8e6d190a6bead92ac62577f51ce6b821e1ba"
	I1124 14:19:53.481389  200822 cri.go:89] found id: "4037ee2765bda709b510d5f015e77323b584c6dd7204ad7c638918dcd2628c45"
	I1124 14:19:53.481392  200822 cri.go:89] found id: ""
	I1124 14:19:53.481440  200822 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 14:19:53.494006  200822 retry.go:31] will retry after 247.765471ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T14:19:53Z" level=error msg="open /run/runc: no such file or directory"
	I1124 14:19:53.742494  200822 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 14:19:53.759644  200822 pause.go:52] kubelet running: false
	I1124 14:19:53.759788  200822 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1124 14:19:54.070400  200822 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1124 14:19:54.070559  200822 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1124 14:19:54.176966  200822 cri.go:89] found id: "7500421b8d518959966543c2fb44123cf1e925d09b9f3a19358de4f5ccaf03f5"
	I1124 14:19:54.177035  200822 cri.go:89] found id: "91d70065b14a06021db8c9a017b68c7833b9f540e25841cd0422a6eac3a15b51"
	I1124 14:19:54.177054  200822 cri.go:89] found id: "b98558ac51e4ca540f3cbbc6b2f05fe5584b13c8eb1c8764289a95ecdde989f6"
	I1124 14:19:54.177076  200822 cri.go:89] found id: "9edacfe959f782420519cd918af883fab72dba651cea4c1003317aa7dbb5aee2"
	I1124 14:19:54.177117  200822 cri.go:89] found id: "1c3203cd2d06f35ce87a959266c2a7517112b74ce7421df704dc7f717e2c1e12"
	I1124 14:19:54.177139  200822 cri.go:89] found id: "6fb3ae76e7269290f5063bc5ecd82c590e464d07c67d0618070feb631692598d"
	I1124 14:19:54.177160  200822 cri.go:89] found id: "033ef6fd0ada365a2ecc235eed62496fe7b0a609cd2b260dacf36429246eb827"
	I1124 14:19:54.177197  200822 cri.go:89] found id: "838b8a2e9df2c5d45fa6bef18fa814af0df8f6efe64027561859a41453484af0"
	I1124 14:19:54.177218  200822 cri.go:89] found id: "9fae20cc90ea0e80a0e993b46f094ba9011120aee92f560781378c0ce54c97cb"
	I1124 14:19:54.177239  200822 cri.go:89] found id: "1e341c0d44a6f583b56404ba3cbb8e6d190a6bead92ac62577f51ce6b821e1ba"
	I1124 14:19:54.177258  200822 cri.go:89] found id: "4037ee2765bda709b510d5f015e77323b584c6dd7204ad7c638918dcd2628c45"
	I1124 14:19:54.177293  200822 cri.go:89] found id: ""
	I1124 14:19:54.177381  200822 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 14:19:54.193740  200822 out.go:203] 
	W1124 14:19:54.196709  200822 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T14:19:54Z" level=error msg="open /run/runc: no such file or directory"
	
	W1124 14:19:54.196732  200822 out.go:285] * 
	W1124 14:19:54.202603  200822 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1124 14:19:54.205465  200822 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p no-preload-444317 --alsologtostderr -v=1 failed: exit status 80
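The failure pattern above is identical across all three retries: kubelet stops cleanly, crictl enumerates the same eleven containers across the kube-system, kubernetes-dashboard, and istio-operator namespaces, but `sudo runc list -f json` fails each time with `open /run/runc: no such file or directory`. /run/runc is runc's default state root, and since containers demonstrably exist, the CRI runtime is evidently keeping its state somewhere else (for instance a non-default runtime_root in the crio configuration, or a different OCI runtime entirely), so every pause attempt dies at the same probe and the command exits with GUEST_PAUSE (status 80). Below is a minimal Go sketch of that probe, under the assumption that a missing state root should be read as "no runc-managed containers" rather than a fatal error; it is illustrative only, not minikube's actual code or fix.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listRuncContainers re-runs the probe from the log: "sudo runc list -f json".
	// A missing state root is mapped to an empty list instead of an error, which
	// is an assumed policy for this sketch, not minikube's.
	func listRuncContainers() (string, error) {
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
		if err != nil {
			if strings.Contains(string(out), "no such file or directory") {
				return "[]", nil // state root absent: nothing runc-managed to pause
			}
			return "", fmt.Errorf("runc list: %v: %s", err, out)
		}
		return string(out), nil
	}

	func main() {
		list, err := listRuncContainers()
		fmt.Println(list, err)
	}

With such a fallback the caller could proceed to pause only the containers crictl reported; whether that is safe when the runtime state is simply elsewhere is exactly the judgment the real pause path would have to make.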
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-444317
helpers_test.go:243: (dbg) docker inspect no-preload-444317:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ade20648158abf5218a944caa623bf3f6036e6cac8f095be63310184940923ce",
	        "Created": "2025-11-24T14:17:08.709891648Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 196004,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T14:18:51.022201393Z",
	            "FinishedAt": "2025-11-24T14:18:50.169456401Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/ade20648158abf5218a944caa623bf3f6036e6cac8f095be63310184940923ce/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ade20648158abf5218a944caa623bf3f6036e6cac8f095be63310184940923ce/hostname",
	        "HostsPath": "/var/lib/docker/containers/ade20648158abf5218a944caa623bf3f6036e6cac8f095be63310184940923ce/hosts",
	        "LogPath": "/var/lib/docker/containers/ade20648158abf5218a944caa623bf3f6036e6cac8f095be63310184940923ce/ade20648158abf5218a944caa623bf3f6036e6cac8f095be63310184940923ce-json.log",
	        "Name": "/no-preload-444317",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-444317:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-444317",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ade20648158abf5218a944caa623bf3f6036e6cac8f095be63310184940923ce",
	                "LowerDir": "/var/lib/docker/overlay2/a5efc0bfe8b92c8f524d5dc30bc92e055435f884ab7bf2fa08436557c135aef1-init/diff:/var/lib/docker/overlay2/13a44a1c9c7389f495d930a01834ff28273a0e5eb2fe3411fc4db3ff0709690d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a5efc0bfe8b92c8f524d5dc30bc92e055435f884ab7bf2fa08436557c135aef1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a5efc0bfe8b92c8f524d5dc30bc92e055435f884ab7bf2fa08436557c135aef1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a5efc0bfe8b92c8f524d5dc30bc92e055435f884ab7bf2fa08436557c135aef1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-444317",
	                "Source": "/var/lib/docker/volumes/no-preload-444317/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-444317",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-444317",
	                "name.minikube.sigs.k8s.io": "no-preload-444317",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "bf99d0dab9c1b251e90cd738e9c0b89b6a83525451730ff8e575e7a1689d4cb9",
	            "SandboxKey": "/var/run/docker/netns/bf99d0dab9c1",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33068"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33069"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33072"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33070"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33071"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-444317": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "e6:8a:56:48:3f:9e",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "02f1d732c24a57a6012dfe448076c210da6d01bbcb8679ec8ce3692995d11521",
	                    "EndpointID": "e0dab7e3a24274195aa31d28438fb53bf57c22731fc13e8bca0fbcf1428a37b3",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-444317",
	                        "ade20648158a"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
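The Ports block in the inspect output above is what the earlier cli_runner call reads: the Go template `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}` indexes NetworkSettings.Ports at "22/tcp" and takes the first binding's HostPort, 33068, which is exactly the Port in the log's `new ssh client` line. A small sketch of the same lookup from Go (the helper name is illustrative):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// sshHostPort returns the host port docker mapped to the container's 22/tcp,
	// using the same inspect template shown in the test log.
	func sshHostPort(container string) (string, error) {
		tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil // e.g. "33068" for no-preload-444317
	}

	func main() {
		port, err := sshHostPort("no-preload-444317")
		fmt.Println(port, err)
	}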
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-444317 -n no-preload-444317
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-444317 -n no-preload-444317: exit status 2 (400.463673ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-444317 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-444317 logs -n 25: (1.737356169s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cert-options-097221 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-097221    │ jenkins │ v1.37.0 │ 24 Nov 25 14:14 UTC │ 24 Nov 25 14:14 UTC │
	│ delete  │ -p cert-options-097221                                                                                                                                                                                                                        │ cert-options-097221    │ jenkins │ v1.37.0 │ 24 Nov 25 14:14 UTC │ 24 Nov 25 14:14 UTC │
	│ start   │ -p old-k8s-version-706771 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-706771 │ jenkins │ v1.37.0 │ 24 Nov 25 14:14 UTC │ 24 Nov 25 14:15 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-706771 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-706771 │ jenkins │ v1.37.0 │ 24 Nov 25 14:15 UTC │                     │
	│ stop    │ -p old-k8s-version-706771 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-706771 │ jenkins │ v1.37.0 │ 24 Nov 25 14:15 UTC │ 24 Nov 25 14:15 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-706771 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-706771 │ jenkins │ v1.37.0 │ 24 Nov 25 14:15 UTC │ 24 Nov 25 14:15 UTC │
	│ start   │ -p old-k8s-version-706771 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-706771 │ jenkins │ v1.37.0 │ 24 Nov 25 14:15 UTC │ 24 Nov 25 14:16 UTC │
	│ image   │ old-k8s-version-706771 image list --format=json                                                                                                                                                                                               │ old-k8s-version-706771 │ jenkins │ v1.37.0 │ 24 Nov 25 14:16 UTC │ 24 Nov 25 14:16 UTC │
	│ pause   │ -p old-k8s-version-706771 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-706771 │ jenkins │ v1.37.0 │ 24 Nov 25 14:16 UTC │                     │
	│ start   │ -p cert-expiration-032076 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-032076 │ jenkins │ v1.37.0 │ 24 Nov 25 14:17 UTC │ 24 Nov 25 14:17 UTC │
	│ delete  │ -p old-k8s-version-706771                                                                                                                                                                                                                     │ old-k8s-version-706771 │ jenkins │ v1.37.0 │ 24 Nov 25 14:17 UTC │ 24 Nov 25 14:17 UTC │
	│ delete  │ -p old-k8s-version-706771                                                                                                                                                                                                                     │ old-k8s-version-706771 │ jenkins │ v1.37.0 │ 24 Nov 25 14:17 UTC │ 24 Nov 25 14:17 UTC │
	│ start   │ -p no-preload-444317 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-444317      │ jenkins │ v1.37.0 │ 24 Nov 25 14:17 UTC │ 24 Nov 25 14:18 UTC │
	│ delete  │ -p cert-expiration-032076                                                                                                                                                                                                                     │ cert-expiration-032076 │ jenkins │ v1.37.0 │ 24 Nov 25 14:17 UTC │ 24 Nov 25 14:17 UTC │
	│ start   │ -p embed-certs-720293 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-720293     │ jenkins │ v1.37.0 │ 24 Nov 25 14:17 UTC │ 24 Nov 25 14:19 UTC │
	│ addons  │ enable metrics-server -p no-preload-444317 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-444317      │ jenkins │ v1.37.0 │ 24 Nov 25 14:18 UTC │                     │
	│ stop    │ -p no-preload-444317 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-444317      │ jenkins │ v1.37.0 │ 24 Nov 25 14:18 UTC │ 24 Nov 25 14:18 UTC │
	│ addons  │ enable dashboard -p no-preload-444317 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-444317      │ jenkins │ v1.37.0 │ 24 Nov 25 14:18 UTC │ 24 Nov 25 14:18 UTC │
	│ start   │ -p no-preload-444317 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-444317      │ jenkins │ v1.37.0 │ 24 Nov 25 14:18 UTC │ 24 Nov 25 14:19 UTC │
	│ addons  │ enable metrics-server -p embed-certs-720293 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-720293     │ jenkins │ v1.37.0 │ 24 Nov 25 14:19 UTC │                     │
	│ stop    │ -p embed-certs-720293 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-720293     │ jenkins │ v1.37.0 │ 24 Nov 25 14:19 UTC │ 24 Nov 25 14:19 UTC │
	│ addons  │ enable dashboard -p embed-certs-720293 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-720293     │ jenkins │ v1.37.0 │ 24 Nov 25 14:19 UTC │ 24 Nov 25 14:19 UTC │
	│ start   │ -p embed-certs-720293 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-720293     │ jenkins │ v1.37.0 │ 24 Nov 25 14:19 UTC │                     │
	│ image   │ no-preload-444317 image list --format=json                                                                                                                                                                                                    │ no-preload-444317      │ jenkins │ v1.37.0 │ 24 Nov 25 14:19 UTC │ 24 Nov 25 14:19 UTC │
	│ pause   │ -p no-preload-444317 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-444317      │ jenkins │ v1.37.0 │ 24 Nov 25 14:19 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 14:19:32
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
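	The header above documents the klog-style line format shared by every entry that follows. A throwaway parser for that format, purely illustrative:

	package main

	import (
		"fmt"
		"regexp"
	)

	// klogLine matches the "[IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg"
	// format described in the log header.
	var klogLine = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+) ([^ :]+:\d+)\] (.*)$`)

	func main() {
		m := klogLine.FindStringSubmatch("I1124 14:19:32.542428  198824 out.go:360] Setting OutFile to fd 1 ...")
		// prints: I 1124 14:19:32.542428 198824 out.go:360 Setting OutFile to fd 1 ...
		fmt.Println(m[1], m[2], m[3], m[4], m[5], m[6])
	}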
	I1124 14:19:32.542428  198824 out.go:360] Setting OutFile to fd 1 ...
	I1124 14:19:32.542789  198824 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 14:19:32.542803  198824 out.go:374] Setting ErrFile to fd 2...
	I1124 14:19:32.542810  198824 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 14:19:32.543105  198824 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-2805/.minikube/bin
	I1124 14:19:32.543572  198824 out.go:368] Setting JSON to false
	I1124 14:19:32.544969  198824 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":7324,"bootTime":1763986649,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1124 14:19:32.545048  198824 start.go:143] virtualization:  
	I1124 14:19:32.548403  198824 out.go:179] * [embed-certs-720293] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1124 14:19:32.552315  198824 out.go:179]   - MINIKUBE_LOCATION=21932
	I1124 14:19:32.552432  198824 notify.go:221] Checking for updates...
	I1124 14:19:32.558538  198824 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 14:19:32.561614  198824 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21932-2805/kubeconfig
	I1124 14:19:32.564572  198824 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21932-2805/.minikube
	I1124 14:19:32.567744  198824 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1124 14:19:32.570620  198824 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 14:19:32.574068  198824 config.go:182] Loaded profile config "embed-certs-720293": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 14:19:32.574620  198824 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 14:19:32.600910  198824 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1124 14:19:32.601026  198824 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 14:19:32.665060  198824 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-24 14:19:32.65533038 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 14:19:32.665267  198824 docker.go:319] overlay module found
	I1124 14:19:32.670237  198824 out.go:179] * Using the docker driver based on existing profile
	I1124 14:19:32.672984  198824 start.go:309] selected driver: docker
	I1124 14:19:32.673002  198824 start.go:927] validating driver "docker" against &{Name:embed-certs-720293 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-720293 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 14:19:32.673100  198824 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 14:19:32.673896  198824 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 14:19:32.730978  198824 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-24 14:19:32.721300318 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 14:19:32.731326  198824 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 14:19:32.731400  198824 cni.go:84] Creating CNI manager for ""
	I1124 14:19:32.731458  198824 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 14:19:32.731500  198824 start.go:353] cluster config:
	{Name:embed-certs-720293 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-720293 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 14:19:32.736736  198824 out.go:179] * Starting "embed-certs-720293" primary control-plane node in "embed-certs-720293" cluster
	I1124 14:19:32.739533  198824 cache.go:134] Beginning downloading kic base image for docker with crio
	I1124 14:19:32.742433  198824 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1124 14:19:32.745317  198824 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 14:19:32.745403  198824 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21932-2805/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1124 14:19:32.745448  198824 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1124 14:19:32.745674  198824 cache.go:65] Caching tarball of preloaded images
	I1124 14:19:32.745772  198824 preload.go:238] Found /home/jenkins/minikube-integration/21932-2805/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1124 14:19:32.745784  198824 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1124 14:19:32.745922  198824 profile.go:143] Saving config to /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/embed-certs-720293/config.json ...
	I1124 14:19:32.771887  198824 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1124 14:19:32.771911  198824 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1124 14:19:32.771932  198824 cache.go:240] Successfully downloaded all kic artifacts
	I1124 14:19:32.771962  198824 start.go:360] acquireMachinesLock for embed-certs-720293: {Name:mk63d8a86030ce5af3799b85ca4bd5722aa0f10b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 14:19:32.772027  198824 start.go:364] duration metric: took 43.11µs to acquireMachinesLock for "embed-certs-720293"
	I1124 14:19:32.772049  198824 start.go:96] Skipping create...Using existing machine configuration
	I1124 14:19:32.772058  198824 fix.go:54] fixHost starting: 
	I1124 14:19:32.772311  198824 cli_runner.go:164] Run: docker container inspect embed-certs-720293 --format={{.State.Status}}
	I1124 14:19:32.790150  198824 fix.go:112] recreateIfNeeded on embed-certs-720293: state=Stopped err=<nil>
	W1124 14:19:32.790179  198824 fix.go:138] unexpected machine state, will restart: <nil>
	W1124 14:19:31.567589  195877 pod_ready.go:104] pod "coredns-66bc5c9577-lrh58" is not "Ready", error: <nil>
	W1124 14:19:33.571844  195877 pod_ready.go:104] pod "coredns-66bc5c9577-lrh58" is not "Ready", error: <nil>
	I1124 14:19:32.793455  198824 out.go:252] * Restarting existing docker container for "embed-certs-720293" ...
	I1124 14:19:32.793556  198824 cli_runner.go:164] Run: docker start embed-certs-720293
	I1124 14:19:33.072202  198824 cli_runner.go:164] Run: docker container inspect embed-certs-720293 --format={{.State.Status}}
	I1124 14:19:33.092725  198824 kic.go:430] container "embed-certs-720293" state is running.
	I1124 14:19:33.093184  198824 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-720293
	I1124 14:19:33.115490  198824 profile.go:143] Saving config to /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/embed-certs-720293/config.json ...
	I1124 14:19:33.115706  198824 machine.go:94] provisionDockerMachine start ...
	I1124 14:19:33.115764  198824 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-720293
	I1124 14:19:33.144021  198824 main.go:143] libmachine: Using SSH client type: native
	I1124 14:19:33.144485  198824 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I1124 14:19:33.144499  198824 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 14:19:33.145076  198824 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:49890->127.0.0.1:33073: read: connection reset by peer
	I1124 14:19:36.299179  198824 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-720293
	
	I1124 14:19:36.299203  198824 ubuntu.go:182] provisioning hostname "embed-certs-720293"
	I1124 14:19:36.299265  198824 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-720293
	I1124 14:19:36.321875  198824 main.go:143] libmachine: Using SSH client type: native
	I1124 14:19:36.322195  198824 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I1124 14:19:36.322212  198824 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-720293 && echo "embed-certs-720293" | sudo tee /etc/hostname
	I1124 14:19:36.502653  198824 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-720293
	
	I1124 14:19:36.502770  198824 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-720293
	I1124 14:19:36.525398  198824 main.go:143] libmachine: Using SSH client type: native
	I1124 14:19:36.525715  198824 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I1124 14:19:36.525737  198824 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-720293' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-720293/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-720293' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 14:19:36.679526  198824 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 14:19:36.679564  198824 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21932-2805/.minikube CaCertPath:/home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21932-2805/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21932-2805/.minikube}
	I1124 14:19:36.679609  198824 ubuntu.go:190] setting up certificates
	I1124 14:19:36.679620  198824 provision.go:84] configureAuth start
	I1124 14:19:36.679693  198824 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-720293
	I1124 14:19:36.697153  198824 provision.go:143] copyHostCerts
	I1124 14:19:36.697225  198824 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-2805/.minikube/ca.pem, removing ...
	I1124 14:19:36.697244  198824 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-2805/.minikube/ca.pem
	I1124 14:19:36.697320  198824 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21932-2805/.minikube/ca.pem (1078 bytes)
	I1124 14:19:36.697483  198824 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-2805/.minikube/cert.pem, removing ...
	I1124 14:19:36.697496  198824 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-2805/.minikube/cert.pem
	I1124 14:19:36.697532  198824 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-2805/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21932-2805/.minikube/cert.pem (1123 bytes)
	I1124 14:19:36.697610  198824 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-2805/.minikube/key.pem, removing ...
	I1124 14:19:36.697621  198824 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-2805/.minikube/key.pem
	I1124 14:19:36.697648  198824 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-2805/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21932-2805/.minikube/key.pem (1675 bytes)
	I1124 14:19:36.697712  198824 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21932-2805/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca-key.pem org=jenkins.embed-certs-720293 san=[127.0.0.1 192.168.85.2 embed-certs-720293 localhost minikube]
	I1124 14:19:36.961870  198824 provision.go:177] copyRemoteCerts
	I1124 14:19:36.961939  198824 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 14:19:36.961983  198824 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-720293
	I1124 14:19:36.982373  198824 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/embed-certs-720293/id_rsa Username:docker}
	I1124 14:19:37.095183  198824 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1124 14:19:37.114029  198824 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1124 14:19:37.135747  198824 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1124 14:19:37.154343  198824 provision.go:87] duration metric: took 474.697114ms to configureAuth
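	configureAuth above generated a fresh server certificate whose SANs come from the logged san=[...] list (127.0.0.1, the node IP 192.168.85.2, the profile name, localhost, minikube) and copied it to /etc/docker over SSH. A compressed sketch of that kind of issuance with Go's crypto/x509, assuming a CA certificate and key are already loaded; the issueServerCert name, the 2048-bit key size, and the serial scheme are illustrative, not minikube's actual provisioning code:

	package certs

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"time"
	)

	// issueServerCert signs a server certificate for the SANs seen in the log,
	// valid for the profile's 26280h cert-expiration window.
	func issueServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			return nil, nil, err
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(time.Now().UnixNano()),
			Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-720293"}},
			NotBefore:    time.Now().Add(-time.Hour),
			NotAfter:     time.Now().Add(26280 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"embed-certs-720293", "localhost", "minikube"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
		if err != nil {
			return nil, nil, err
		}
		return pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), key, nil
	}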
	I1124 14:19:37.154372  198824 ubuntu.go:206] setting minikube options for container-runtime
	I1124 14:19:37.154594  198824 config.go:182] Loaded profile config "embed-certs-720293": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 14:19:37.154716  198824 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-720293
	I1124 14:19:37.172876  198824 main.go:143] libmachine: Using SSH client type: native
	I1124 14:19:37.173189  198824 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I1124 14:19:37.173208  198824 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	W1124 14:19:36.068284  195877 pod_ready.go:104] pod "coredns-66bc5c9577-lrh58" is not "Ready", error: <nil>
	I1124 14:19:37.567615  195877 pod_ready.go:94] pod "coredns-66bc5c9577-lrh58" is "Ready"
	I1124 14:19:37.567641  195877 pod_ready.go:86] duration metric: took 31.005915484s for pod "coredns-66bc5c9577-lrh58" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:19:37.570257  195877 pod_ready.go:83] waiting for pod "etcd-no-preload-444317" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:19:37.574549  195877 pod_ready.go:94] pod "etcd-no-preload-444317" is "Ready"
	I1124 14:19:37.574626  195877 pod_ready.go:86] duration metric: took 4.347385ms for pod "etcd-no-preload-444317" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:19:37.577335  195877 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-444317" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:19:37.582417  195877 pod_ready.go:94] pod "kube-apiserver-no-preload-444317" is "Ready"
	I1124 14:19:37.582439  195877 pod_ready.go:86] duration metric: took 5.081047ms for pod "kube-apiserver-no-preload-444317" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:19:37.584983  195877 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-444317" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:19:37.766306  195877 pod_ready.go:94] pod "kube-controller-manager-no-preload-444317" is "Ready"
	I1124 14:19:37.766330  195877 pod_ready.go:86] duration metric: took 181.326236ms for pod "kube-controller-manager-no-preload-444317" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:19:37.966603  195877 pod_ready.go:83] waiting for pod "kube-proxy-m4fb4" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:19:38.366582  195877 pod_ready.go:94] pod "kube-proxy-m4fb4" is "Ready"
	I1124 14:19:38.366610  195877 pod_ready.go:86] duration metric: took 399.984444ms for pod "kube-proxy-m4fb4" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:19:38.567099  195877 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-444317" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:19:38.966162  195877 pod_ready.go:94] pod "kube-scheduler-no-preload-444317" is "Ready"
	I1124 14:19:38.966188  195877 pod_ready.go:86] duration metric: took 399.061177ms for pod "kube-scheduler-no-preload-444317" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:19:38.966201  195877 pod_ready.go:40] duration metric: took 32.410793769s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 14:19:39.044825  195877 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1124 14:19:39.049783  195877 out.go:179] * Done! kubectl is now configured to use "no-preload-444317" cluster and "default" namespace by default
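
	The pod_ready.go lines in the 195877 stream above implement a poll loop over label-selected kube-system pods until each reports the Ready condition. A hypothetical client-go sketch of the same pattern follows; the kubeconfig path, namespace, selector, and timeout are assumptions, not minikube's actual code.

	    package main

	    import (
	        "context"
	        "fmt"
	        "time"

	        corev1 "k8s.io/api/core/v1"
	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/client-go/kubernetes"
	        "k8s.io/client-go/tools/clientcmd"
	    )

	    // podReady reports whether the pod carries the Ready=True condition.
	    func podReady(p *corev1.Pod) bool {
	        for _, c := range p.Status.Conditions {
	            if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
	                return true
	            }
	        }
	        return false
	    }

	    func main() {
	        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	        if err != nil {
	            panic(err)
	        }
	        cs, err := kubernetes.NewForConfig(cfg)
	        if err != nil {
	            panic(err)
	        }
	        deadline := time.Now().Add(4 * time.Minute) // log: "extra waiting up to 4m0s"
	        for time.Now().Before(deadline) {
	            pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
	                metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
	            if err == nil && len(pods.Items) > 0 {
	                ready := true
	                for i := range pods.Items {
	                    if !podReady(&pods.Items[i]) {
	                        ready = false
	                        break
	                    }
	                }
	                if ready {
	                    fmt.Println("all selected pods are Ready")
	                    return
	                }
	            }
	            time.Sleep(2 * time.Second)
	        }
	        fmt.Println("timed out waiting for pods")
	    }
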
	I1124 14:19:37.589217  198824 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1124 14:19:37.589280  198824 machine.go:97] duration metric: took 4.473564566s to provisionDockerMachine
	I1124 14:19:37.589309  198824 start.go:293] postStartSetup for "embed-certs-720293" (driver="docker")
	I1124 14:19:37.589365  198824 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 14:19:37.589472  198824 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 14:19:37.589540  198824 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-720293
	I1124 14:19:37.613741  198824 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/embed-certs-720293/id_rsa Username:docker}
	I1124 14:19:37.719527  198824 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 14:19:37.722736  198824 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 14:19:37.722762  198824 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 14:19:37.722773  198824 filesync.go:126] Scanning /home/jenkins/minikube-integration/21932-2805/.minikube/addons for local assets ...
	I1124 14:19:37.722827  198824 filesync.go:126] Scanning /home/jenkins/minikube-integration/21932-2805/.minikube/files for local assets ...
	I1124 14:19:37.722911  198824 filesync.go:149] local asset: /home/jenkins/minikube-integration/21932-2805/.minikube/files/etc/ssl/certs/46112.pem -> 46112.pem in /etc/ssl/certs
	I1124 14:19:37.723014  198824 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 14:19:37.730458  198824 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/files/etc/ssl/certs/46112.pem --> /etc/ssl/certs/46112.pem (1708 bytes)
	I1124 14:19:37.747764  198824 start.go:296] duration metric: took 158.423791ms for postStartSetup
	I1124 14:19:37.747843  198824 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 14:19:37.747899  198824 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-720293
	I1124 14:19:37.764668  198824 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/embed-certs-720293/id_rsa Username:docker}
	I1124 14:19:37.872442  198824 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 14:19:37.877270  198824 fix.go:56] duration metric: took 5.105204695s for fixHost
	I1124 14:19:37.877294  198824 start.go:83] releasing machines lock for "embed-certs-720293", held for 5.10525464s
	I1124 14:19:37.877363  198824 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-720293
	I1124 14:19:37.895319  198824 ssh_runner.go:195] Run: cat /version.json
	I1124 14:19:37.895463  198824 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-720293
	I1124 14:19:37.895736  198824 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 14:19:37.895819  198824 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-720293
	I1124 14:19:37.917164  198824 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/embed-certs-720293/id_rsa Username:docker}
	I1124 14:19:37.929188  198824 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/embed-certs-720293/id_rsa Username:docker}
	I1124 14:19:38.028050  198824 ssh_runner.go:195] Run: systemctl --version
	I1124 14:19:38.130840  198824 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1124 14:19:38.178795  198824 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 14:19:38.183595  198824 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 14:19:38.183677  198824 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 14:19:38.191529  198824 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1124 14:19:38.191555  198824 start.go:496] detecting cgroup driver to use...
	I1124 14:19:38.191586  198824 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1124 14:19:38.191646  198824 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1124 14:19:38.209318  198824 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1124 14:19:38.222714  198824 docker.go:218] disabling cri-docker service (if available) ...
	I1124 14:19:38.222784  198824 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 14:19:38.238683  198824 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 14:19:38.252044  198824 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 14:19:38.373600  198824 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 14:19:38.496108  198824 docker.go:234] disabling docker service ...
	I1124 14:19:38.496173  198824 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 14:19:38.511383  198824 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 14:19:38.525130  198824 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 14:19:38.660445  198824 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 14:19:38.774276  198824 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 14:19:38.786884  198824 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 14:19:38.801457  198824 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1124 14:19:38.801568  198824 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 14:19:38.811459  198824 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1124 14:19:38.811556  198824 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 14:19:38.821978  198824 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 14:19:38.832285  198824 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 14:19:38.841955  198824 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 14:19:38.850985  198824 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 14:19:38.861973  198824 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 14:19:38.871807  198824 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 14:19:38.880613  198824 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 14:19:38.888526  198824 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 14:19:38.896245  198824 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 14:19:39.016811  198824 ssh_runner.go:195] Run: sudo systemctl restart crio
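
	The sed runs above rewrite individual keys in the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroup manager, conmon cgroup, default sysctls) before the daemon-reload and restart. A minimal Go sketch of the two main rewrites — not minikube's implementation, with the drop-in path taken from the log:

	    package main

	    import (
	        "os"
	        "regexp"
	    )

	    func main() {
	        const path = "/etc/crio/crio.conf.d/02-crio.conf"
	        data, err := os.ReadFile(path)
	        if err != nil {
	            panic(err)
	        }
	        // Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|'
	        data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
	            ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
	        // Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	        data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
	            ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
	        if err := os.WriteFile(path, data, 0o644); err != nil {
	            panic(err)
	        }
	    }
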
	I1124 14:19:39.236589  198824 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1124 14:19:39.236668  198824 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1124 14:19:39.240980  198824 start.go:564] Will wait 60s for crictl version
	I1124 14:19:39.241046  198824 ssh_runner.go:195] Run: which crictl
	I1124 14:19:39.245598  198824 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 14:19:39.289323  198824 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1124 14:19:39.289409  198824 ssh_runner.go:195] Run: crio --version
	I1124 14:19:39.334030  198824 ssh_runner.go:195] Run: crio --version
	I1124 14:19:39.390321  198824 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1124 14:19:39.393337  198824 cli_runner.go:164] Run: docker network inspect embed-certs-720293 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 14:19:39.430091  198824 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1124 14:19:39.434519  198824 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
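
	The /etc/hosts update above uses a grep -v + echo + cp pattern: drop any existing line for the host, append the fresh entry, and copy the result back over /etc/hosts. A hypothetical Go equivalent follows; it writes the file in place, mirroring the log's cp rather than a rename, which matters when /etc/hosts is a Docker bind mount that must keep its inode.

	    package main

	    import (
	        "os"
	        "strings"
	    )

	    func main() {
	        const entry = "192.168.85.1\thost.minikube.internal"
	        data, err := os.ReadFile("/etc/hosts")
	        if err != nil {
	            panic(err)
	        }
	        var kept []string
	        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
	            // grep -v $'\thost.minikube.internal$'
	            if !strings.HasSuffix(line, "\thost.minikube.internal") {
	                kept = append(kept, line)
	            }
	        }
	        kept = append(kept, entry)
	        // Write in place (cp semantics), not via rename.
	        if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
	            panic(err)
	        }
	    }
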
	I1124 14:19:39.449809  198824 kubeadm.go:884] updating cluster {Name:embed-certs-720293 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-720293 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 14:19:39.449948  198824 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 14:19:39.450004  198824 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 14:19:39.501878  198824 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 14:19:39.501899  198824 crio.go:433] Images already preloaded, skipping extraction
	I1124 14:19:39.501956  198824 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 14:19:39.542244  198824 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 14:19:39.542265  198824 cache_images.go:86] Images are preloaded, skipping loading
	I1124 14:19:39.542277  198824 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1124 14:19:39.542376  198824 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-720293 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-720293 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1124 14:19:39.542455  198824 ssh_runner.go:195] Run: crio config
	I1124 14:19:39.619095  198824 cni.go:84] Creating CNI manager for ""
	I1124 14:19:39.619126  198824 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 14:19:39.619170  198824 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 14:19:39.619221  198824 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-720293 NodeName:embed-certs-720293 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 14:19:39.619498  198824 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-720293"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1124 14:19:39.619601  198824 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1124 14:19:39.627783  198824 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 14:19:39.627864  198824 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 14:19:39.635712  198824 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1124 14:19:39.649324  198824 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 14:19:39.661654  198824 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
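
	The kubeadm config dumped above is rendered from the options struct logged at kubeadm.go:190 and copied to /var/tmp/minikube/kubeadm.yaml.new. A much-simplified text/template sketch of that rendering step follows; the struct fields and template text here are illustrative, not minikube's actual template.

	    package main

	    import (
	        "os"
	        "text/template"
	    )

	    // kubeadmOpts is a cut-down stand-in for the options struct in the log.
	    type kubeadmOpts struct {
	        AdvertiseAddress string
	        APIServerPort    int
	        NodeName         string
	        PodSubnet        string
	        ServiceCIDR      string
	    }

	    const tmpl = `apiVersion: kubeadm.k8s.io/v1beta4
	    kind: InitConfiguration
	    localAPIEndpoint:
	      advertiseAddress: {{.AdvertiseAddress}}
	      bindPort: {{.APIServerPort}}
	    nodeRegistration:
	      name: "{{.NodeName}}"
	    ---
	    apiVersion: kubeadm.k8s.io/v1beta4
	    kind: ClusterConfiguration
	    networking:
	      podSubnet: "{{.PodSubnet}}"
	      serviceSubnet: {{.ServiceCIDR}}
	    `

	    func main() {
	        t := template.Must(template.New("kubeadm").Parse(tmpl))
	        if err := t.Execute(os.Stdout, kubeadmOpts{
	            AdvertiseAddress: "192.168.85.2",
	            APIServerPort:    8443,
	            NodeName:         "embed-certs-720293",
	            PodSubnet:        "10.244.0.0/16",
	            ServiceCIDR:      "10.96.0.0/12",
	        }); err != nil {
	            panic(err)
	        }
	    }
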
	I1124 14:19:39.675333  198824 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1124 14:19:39.679084  198824 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 14:19:39.689419  198824 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 14:19:39.807314  198824 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 14:19:39.823834  198824 certs.go:69] Setting up /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/embed-certs-720293 for IP: 192.168.85.2
	I1124 14:19:39.823853  198824 certs.go:195] generating shared ca certs ...
	I1124 14:19:39.823870  198824 certs.go:227] acquiring lock for ca certs: {Name:mk5b88bcf3bee8e73291a2c9c79f99bafa2afa7b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:19:39.824011  198824 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21932-2805/.minikube/ca.key
	I1124 14:19:39.824054  198824 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21932-2805/.minikube/proxy-client-ca.key
	I1124 14:19:39.824061  198824 certs.go:257] generating profile certs ...
	I1124 14:19:39.824153  198824 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/embed-certs-720293/client.key
	I1124 14:19:39.824216  198824 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/embed-certs-720293/apiserver.key.8c3742eb
	I1124 14:19:39.824262  198824 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/embed-certs-720293/proxy-client.key
	I1124 14:19:39.824364  198824 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2805/.minikube/certs/4611.pem (1338 bytes)
	W1124 14:19:39.824397  198824 certs.go:480] ignoring /home/jenkins/minikube-integration/21932-2805/.minikube/certs/4611_empty.pem, impossibly tiny 0 bytes
	I1124 14:19:39.824405  198824 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca-key.pem (1679 bytes)
	I1124 14:19:39.824432  198824 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca.pem (1078 bytes)
	I1124 14:19:39.824455  198824 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2805/.minikube/certs/cert.pem (1123 bytes)
	I1124 14:19:39.824477  198824 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2805/.minikube/certs/key.pem (1675 bytes)
	I1124 14:19:39.824539  198824 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2805/.minikube/files/etc/ssl/certs/46112.pem (1708 bytes)
	I1124 14:19:39.825139  198824 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 14:19:39.846635  198824 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1124 14:19:39.869884  198824 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 14:19:39.892098  198824 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1124 14:19:39.914646  198824 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/embed-certs-720293/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1124 14:19:39.943953  198824 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/embed-certs-720293/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1124 14:19:39.964855  198824 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/embed-certs-720293/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 14:19:39.986206  198824 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/embed-certs-720293/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1124 14:19:40.021362  198824 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/certs/4611.pem --> /usr/share/ca-certificates/4611.pem (1338 bytes)
	I1124 14:19:40.046247  198824 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/files/etc/ssl/certs/46112.pem --> /usr/share/ca-certificates/46112.pem (1708 bytes)
	I1124 14:19:40.068007  198824 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 14:19:40.096461  198824 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 14:19:40.110357  198824 ssh_runner.go:195] Run: openssl version
	I1124 14:19:40.119251  198824 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4611.pem && ln -fs /usr/share/ca-certificates/4611.pem /etc/ssl/certs/4611.pem"
	I1124 14:19:40.129450  198824 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4611.pem
	I1124 14:19:40.134614  198824 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 13:21 /usr/share/ca-certificates/4611.pem
	I1124 14:19:40.134724  198824 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4611.pem
	I1124 14:19:40.177363  198824 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4611.pem /etc/ssl/certs/51391683.0"
	I1124 14:19:40.185991  198824 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/46112.pem && ln -fs /usr/share/ca-certificates/46112.pem /etc/ssl/certs/46112.pem"
	I1124 14:19:40.195733  198824 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/46112.pem
	I1124 14:19:40.199326  198824 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 13:21 /usr/share/ca-certificates/46112.pem
	I1124 14:19:40.199527  198824 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/46112.pem
	I1124 14:19:40.240356  198824 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/46112.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 14:19:40.248199  198824 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 14:19:40.256359  198824 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 14:19:40.259879  198824 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 13:14 /usr/share/ca-certificates/minikubeCA.pem
	I1124 14:19:40.259959  198824 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 14:19:40.300877  198824 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
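
	The test -L / openssl x509 -hash / ln -fs triplets above populate an OpenSSL-style trust directory: each certificate gets a symlink named <subject-hash>.0. A sketch of those same steps, assuming the openssl binary is on PATH; the cert path is one example from the log.

	    package main

	    import (
	        "fmt"
	        "os"
	        "os/exec"
	        "strings"
	    )

	    func main() {
	        cert := "/usr/share/ca-certificates/minikubeCA.pem"
	        // openssl x509 -hash -noout -in <cert> prints the subject-name hash.
	        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	        if err != nil {
	            panic(err)
	        }
	        hash := strings.TrimSpace(string(out)) // e.g. b5213941, matching the log
	        link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	        _ = os.Remove(link) // ln -fs semantics: replace the link if it exists
	        if err := os.Symlink(cert, link); err != nil {
	            panic(err)
	        }
	    }
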
	I1124 14:19:40.308667  198824 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 14:19:40.313459  198824 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1124 14:19:40.355318  198824 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1124 14:19:40.397024  198824 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1124 14:19:40.449225  198824 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1124 14:19:40.495179  198824 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1124 14:19:40.549766  198824 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
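
	Each "openssl x509 -checkend 86400" run above asks whether a certificate will expire within the next 24 hours (non-zero exit if so). A pure-Go equivalent sketch; the path is one example from the log.

	    package main

	    import (
	        "crypto/x509"
	        "encoding/pem"
	        "fmt"
	        "os"
	        "time"
	    )

	    func main() {
	        data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	        if err != nil {
	            panic(err)
	        }
	        block, _ := pem.Decode(data)
	        if block == nil {
	            panic("no PEM block found")
	        }
	        cert, err := x509.ParseCertificate(block.Bytes)
	        if err != nil {
	            panic(err)
	        }
	        // Mirrors -checkend 86400: fail if NotAfter is less than a day away.
	        if time.Until(cert.NotAfter) < 24*time.Hour {
	            fmt.Println("certificate will expire within 86400 seconds")
	            os.Exit(1)
	        }
	        fmt.Println("certificate is valid for at least another day")
	    }
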
	I1124 14:19:40.613439  198824 kubeadm.go:401] StartCluster: {Name:embed-certs-720293 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-720293 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 14:19:40.613585  198824 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 14:19:40.613666  198824 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 14:19:40.710462  198824 cri.go:89] found id: "7a20914603732648c5d9ff34200e808b2002ae00dc4000fe37adb370011a3888"
	I1124 14:19:40.710542  198824 cri.go:89] found id: "7634741324dd1d91cc93df52ab62f4e54882e2826f3185dee5ff5c38bdffd3cf"
	I1124 14:19:40.710602  198824 cri.go:89] found id: "43d901c75e4d3ea7cfdd826b2f38e870e2be39de21570400fd187f7a2239344b"
	I1124 14:19:40.710638  198824 cri.go:89] found id: "da3f0798a706df28b161fc15c24ff964503411fba4af93d09ab0786003dc32ea"
	I1124 14:19:40.710677  198824 cri.go:89] found id: ""
	I1124 14:19:40.710748  198824 ssh_runner.go:195] Run: sudo runc list -f json
	W1124 14:19:40.728643  198824 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T14:19:40Z" level=error msg="open /run/runc: no such file or directory"
	I1124 14:19:40.728775  198824 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 14:19:40.742860  198824 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1124 14:19:40.742927  198824 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1124 14:19:40.743005  198824 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1124 14:19:40.751795  198824 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1124 14:19:40.752384  198824 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-720293" does not appear in /home/jenkins/minikube-integration/21932-2805/kubeconfig
	I1124 14:19:40.752694  198824 kubeconfig.go:62] /home/jenkins/minikube-integration/21932-2805/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-720293" cluster setting kubeconfig missing "embed-certs-720293" context setting]
	I1124 14:19:40.753172  198824 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2805/kubeconfig: {Name:mk95d10d27091d631e85a5a3c35d5e4e38630871 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:19:40.754735  198824 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1124 14:19:40.767370  198824 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1124 14:19:40.767437  198824 kubeadm.go:602] duration metric: took 24.491254ms to restartPrimaryControlPlane
	I1124 14:19:40.767461  198824 kubeadm.go:403] duration metric: took 154.032236ms to StartCluster
	I1124 14:19:40.767489  198824 settings.go:142] acquiring lock: {Name:mk89c1ba43c874315f683e1eb3a8f5ff3817a931 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:19:40.767566  198824 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21932-2805/kubeconfig
	I1124 14:19:40.768802  198824 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2805/kubeconfig: {Name:mk95d10d27091d631e85a5a3c35d5e4e38630871 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:19:40.769041  198824 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 14:19:40.769377  198824 config.go:182] Loaded profile config "embed-certs-720293": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 14:19:40.769534  198824 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 14:19:40.769609  198824 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-720293"
	I1124 14:19:40.769628  198824 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-720293"
	W1124 14:19:40.769634  198824 addons.go:248] addon storage-provisioner should already be in state true
	I1124 14:19:40.769656  198824 host.go:66] Checking if "embed-certs-720293" exists ...
	I1124 14:19:40.769723  198824 addons.go:70] Setting dashboard=true in profile "embed-certs-720293"
	I1124 14:19:40.769738  198824 addons.go:239] Setting addon dashboard=true in "embed-certs-720293"
	W1124 14:19:40.769745  198824 addons.go:248] addon dashboard should already be in state true
	I1124 14:19:40.769779  198824 host.go:66] Checking if "embed-certs-720293" exists ...
	I1124 14:19:40.770185  198824 cli_runner.go:164] Run: docker container inspect embed-certs-720293 --format={{.State.Status}}
	I1124 14:19:40.770293  198824 cli_runner.go:164] Run: docker container inspect embed-certs-720293 --format={{.State.Status}}
	I1124 14:19:40.770701  198824 addons.go:70] Setting default-storageclass=true in profile "embed-certs-720293"
	I1124 14:19:40.770746  198824 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-720293"
	I1124 14:19:40.771015  198824 cli_runner.go:164] Run: docker container inspect embed-certs-720293 --format={{.State.Status}}
	I1124 14:19:40.773161  198824 out.go:179] * Verifying Kubernetes components...
	I1124 14:19:40.779345  198824 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 14:19:40.821065  198824 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1124 14:19:40.825657  198824 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1124 14:19:40.828583  198824 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1124 14:19:40.828611  198824 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1124 14:19:40.828680  198824 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 14:19:40.828682  198824 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-720293
	I1124 14:19:40.834221  198824 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 14:19:40.834243  198824 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 14:19:40.834318  198824 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-720293
	I1124 14:19:40.839295  198824 addons.go:239] Setting addon default-storageclass=true in "embed-certs-720293"
	W1124 14:19:40.839316  198824 addons.go:248] addon default-storageclass should already be in state true
	I1124 14:19:40.839341  198824 host.go:66] Checking if "embed-certs-720293" exists ...
	I1124 14:19:40.839898  198824 cli_runner.go:164] Run: docker container inspect embed-certs-720293 --format={{.State.Status}}
	I1124 14:19:40.892711  198824 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/embed-certs-720293/id_rsa Username:docker}
	I1124 14:19:40.895419  198824 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/embed-certs-720293/id_rsa Username:docker}
	I1124 14:19:40.899567  198824 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 14:19:40.899585  198824 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 14:19:40.899646  198824 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-720293
	I1124 14:19:40.928769  198824 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/embed-certs-720293/id_rsa Username:docker}
	I1124 14:19:41.099533  198824 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1124 14:19:41.099568  198824 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1124 14:19:41.153214  198824 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1124 14:19:41.153283  198824 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1124 14:19:41.168033  198824 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 14:19:41.178627  198824 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 14:19:41.208950  198824 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 14:19:41.220419  198824 node_ready.go:35] waiting up to 6m0s for node "embed-certs-720293" to be "Ready" ...
	I1124 14:19:41.252527  198824 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1124 14:19:41.252595  198824 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1124 14:19:41.328802  198824 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1124 14:19:41.328871  198824 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1124 14:19:41.393726  198824 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1124 14:19:41.393791  198824 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1124 14:19:41.455744  198824 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1124 14:19:41.455808  198824 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1124 14:19:41.535649  198824 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1124 14:19:41.535678  198824 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1124 14:19:41.556977  198824 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1124 14:19:41.557002  198824 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1124 14:19:41.579295  198824 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1124 14:19:41.579323  198824 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1124 14:19:41.601426  198824 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1124 14:19:45.514808  198824 node_ready.go:49] node "embed-certs-720293" is "Ready"
	I1124 14:19:45.514842  198824 node_ready.go:38] duration metric: took 4.29429741s for node "embed-certs-720293" to be "Ready" ...
	I1124 14:19:45.514857  198824 api_server.go:52] waiting for apiserver process to appear ...
	I1124 14:19:45.514913  198824 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 14:19:45.699514  198824 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.520804756s)
	I1124 14:19:47.104229  198824 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.895198427s)
	I1124 14:19:47.104342  198824 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (5.50288546s)
	I1124 14:19:47.104470  198824 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.589541456s)
	I1124 14:19:47.104484  198824 api_server.go:72] duration metric: took 6.33539077s to wait for apiserver process to appear ...
	I1124 14:19:47.104490  198824 api_server.go:88] waiting for apiserver healthz status ...
	I1124 14:19:47.104506  198824 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1124 14:19:47.107273  198824 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-720293 addons enable metrics-server
	
	I1124 14:19:47.110329  198824 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, dashboard
	I1124 14:19:47.113221  198824 addons.go:530] duration metric: took 6.343687145s for enable addons: enabled=[default-storageclass storage-provisioner dashboard]
	I1124 14:19:47.115764  198824 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1124 14:19:47.115787  198824 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1124 14:19:47.605596  198824 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1124 14:19:47.618056  198824 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1124 14:19:47.619385  198824 api_server.go:141] control plane version: v1.34.1
	I1124 14:19:47.619411  198824 api_server.go:131] duration metric: took 514.915785ms to wait for apiserver health ...
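
	The healthz probe above tolerates a transient 500 while poststarthook/rbac/bootstrap-roles finishes, then sees 200 and moves on. A minimal sketch of such a retry loop follows; InsecureSkipVerify stands in for minikube's real client-certificate and CA handling and is purely an assumption here.

	    package main

	    import (
	        "crypto/tls"
	        "fmt"
	        "net/http"
	        "time"
	    )

	    func main() {
	        client := &http.Client{
	            Timeout: 2 * time.Second,
	            Transport: &http.Transport{
	                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	            },
	        }
	        deadline := time.Now().Add(time.Minute)
	        for time.Now().Before(deadline) {
	            resp, err := client.Get("https://192.168.85.2:8443/healthz")
	            if err == nil {
	                resp.Body.Close()
	                if resp.StatusCode == http.StatusOK {
	                    fmt.Println("apiserver healthy")
	                    return
	                }
	                fmt.Println("healthz returned", resp.StatusCode) // e.g. 500 during bootstrap
	            }
	            time.Sleep(500 * time.Millisecond)
	        }
	        fmt.Println("timed out waiting for healthz")
	    }
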
	I1124 14:19:47.619422  198824 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 14:19:47.624624  198824 system_pods.go:59] 8 kube-system pods found
	I1124 14:19:47.624668  198824 system_pods.go:61] "coredns-66bc5c9577-6nztq" [9fbc8e0e-67a3-4086-aa1d-29b18f0c8d19] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 14:19:47.624678  198824 system_pods.go:61] "etcd-embed-certs-720293" [bc16ed26-fa6f-4c97-836c-c9f0b7f731aa] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 14:19:47.624683  198824 system_pods.go:61] "kindnet-ft88w" [7966e19b-c109-4372-8b9d-53d6f04dd7e7] Running
	I1124 14:19:47.624690  198824 system_pods.go:61] "kube-apiserver-embed-certs-720293" [8cdcb85e-986c-4ce2-b890-a8d96ea344c3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1124 14:19:47.624696  198824 system_pods.go:61] "kube-controller-manager-embed-certs-720293" [9e5790d2-8178-4215-9b38-ffedd4359966] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 14:19:47.624701  198824 system_pods.go:61] "kube-proxy-pwpl4" [9404897b-5bae-4f03-987a-01e4ec7795a9] Running
	I1124 14:19:47.624707  198824 system_pods.go:61] "kube-scheduler-embed-certs-720293" [a289a525-28c8-45a8-a4e4-dde78e1ef777] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 14:19:47.624710  198824 system_pods.go:61] "storage-provisioner" [f6c6574d-20bd-49e9-86e6-b0d81b3490c6] Running
	I1124 14:19:47.624716  198824 system_pods.go:74] duration metric: took 5.289451ms to wait for pod list to return data ...
	I1124 14:19:47.624733  198824 default_sa.go:34] waiting for default service account to be created ...
	I1124 14:19:47.627388  198824 default_sa.go:45] found service account: "default"
	I1124 14:19:47.627414  198824 default_sa.go:55] duration metric: took 2.674572ms for default service account to be created ...
	I1124 14:19:47.627423  198824 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 14:19:47.631043  198824 system_pods.go:86] 8 kube-system pods found
	I1124 14:19:47.631077  198824 system_pods.go:89] "coredns-66bc5c9577-6nztq" [9fbc8e0e-67a3-4086-aa1d-29b18f0c8d19] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 14:19:47.631087  198824 system_pods.go:89] "etcd-embed-certs-720293" [bc16ed26-fa6f-4c97-836c-c9f0b7f731aa] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 14:19:47.631092  198824 system_pods.go:89] "kindnet-ft88w" [7966e19b-c109-4372-8b9d-53d6f04dd7e7] Running
	I1124 14:19:47.631099  198824 system_pods.go:89] "kube-apiserver-embed-certs-720293" [8cdcb85e-986c-4ce2-b890-a8d96ea344c3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1124 14:19:47.631106  198824 system_pods.go:89] "kube-controller-manager-embed-certs-720293" [9e5790d2-8178-4215-9b38-ffedd4359966] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 14:19:47.631110  198824 system_pods.go:89] "kube-proxy-pwpl4" [9404897b-5bae-4f03-987a-01e4ec7795a9] Running
	I1124 14:19:47.631117  198824 system_pods.go:89] "kube-scheduler-embed-certs-720293" [a289a525-28c8-45a8-a4e4-dde78e1ef777] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 14:19:47.631121  198824 system_pods.go:89] "storage-provisioner" [f6c6574d-20bd-49e9-86e6-b0d81b3490c6] Running
	I1124 14:19:47.631130  198824 system_pods.go:126] duration metric: took 3.70047ms to wait for k8s-apps to be running ...
	I1124 14:19:47.631145  198824 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 14:19:47.631203  198824 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 14:19:47.645996  198824 system_svc.go:56] duration metric: took 14.842602ms WaitForService to wait for kubelet
	I1124 14:19:47.646036  198824 kubeadm.go:587] duration metric: took 6.876931008s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 14:19:47.646057  198824 node_conditions.go:102] verifying NodePressure condition ...
	I1124 14:19:47.651443  198824 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1124 14:19:47.651475  198824 node_conditions.go:123] node cpu capacity is 2
	I1124 14:19:47.651488  198824 node_conditions.go:105] duration metric: took 5.425567ms to run NodePressure ...
	I1124 14:19:47.651501  198824 start.go:242] waiting for startup goroutines ...
	I1124 14:19:47.651509  198824 start.go:247] waiting for cluster config update ...
	I1124 14:19:47.651520  198824 start.go:256] writing updated cluster config ...
	I1124 14:19:47.651817  198824 ssh_runner.go:195] Run: rm -f paused
	I1124 14:19:47.656578  198824 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 14:19:47.660860  198824 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-6nztq" in "kube-system" namespace to be "Ready" or be gone ...
	W1124 14:19:49.666715  198824 pod_ready.go:104] pod "coredns-66bc5c9577-6nztq" is not "Ready", error: <nil>
	W1124 14:19:51.672376  198824 pod_ready.go:104] pod "coredns-66bc5c9577-6nztq" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Nov 24 14:19:46 no-preload-444317 crio[659]: time="2025-11-24T14:19:46.022562792Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 24 14:19:46 no-preload-444317 crio[659]: time="2025-11-24T14:19:46.027247117Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 24 14:19:46 no-preload-444317 crio[659]: time="2025-11-24T14:19:46.027432136Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 24 14:19:46 no-preload-444317 crio[659]: time="2025-11-24T14:19:46.027515641Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 24 14:19:46 no-preload-444317 crio[659]: time="2025-11-24T14:19:46.032786072Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 24 14:19:46 no-preload-444317 crio[659]: time="2025-11-24T14:19:46.032949725Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 24 14:19:46 no-preload-444317 crio[659]: time="2025-11-24T14:19:46.03302591Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 24 14:19:46 no-preload-444317 crio[659]: time="2025-11-24T14:19:46.037912567Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 24 14:19:46 no-preload-444317 crio[659]: time="2025-11-24T14:19:46.037948022Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 24 14:19:46 no-preload-444317 crio[659]: time="2025-11-24T14:19:46.037974508Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 24 14:19:46 no-preload-444317 crio[659]: time="2025-11-24T14:19:46.045855576Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 24 14:19:46 no-preload-444317 crio[659]: time="2025-11-24T14:19:46.046030306Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 24 14:19:52 no-preload-444317 crio[659]: time="2025-11-24T14:19:52.115015623Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=6a29eab8-f7da-41e6-87cf-18ba5ac4b804 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 14:19:52 no-preload-444317 crio[659]: time="2025-11-24T14:19:52.115880502Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=6ca4d172-f4e0-46ed-81a8-4661cd7a8c74 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 14:19:52 no-preload-444317 crio[659]: time="2025-11-24T14:19:52.117778852Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8pxxl/dashboard-metrics-scraper" id=2baac85b-afbb-4362-a572-4029406575fa name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 14:19:52 no-preload-444317 crio[659]: time="2025-11-24T14:19:52.11803428Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 14:19:52 no-preload-444317 crio[659]: time="2025-11-24T14:19:52.187952327Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 14:19:52 no-preload-444317 crio[659]: time="2025-11-24T14:19:52.189651955Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 14:19:52 no-preload-444317 crio[659]: time="2025-11-24T14:19:52.258999387Z" level=info msg="Created container 1e341c0d44a6f583b56404ba3cbb8e6d190a6bead92ac62577f51ce6b821e1ba: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8pxxl/dashboard-metrics-scraper" id=2baac85b-afbb-4362-a572-4029406575fa name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 14:19:52 no-preload-444317 crio[659]: time="2025-11-24T14:19:52.264346939Z" level=info msg="Starting container: 1e341c0d44a6f583b56404ba3cbb8e6d190a6bead92ac62577f51ce6b821e1ba" id=d2fe1369-5cb6-4caa-9a6e-a9d18a47b1d8 name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 14:19:52 no-preload-444317 conmon[1727]: conmon 1e341c0d44a6f583b564 <ninfo>: container 1729 exited with status 1
	Nov 24 14:19:52 no-preload-444317 crio[659]: time="2025-11-24T14:19:52.277072185Z" level=info msg="Started container" PID=1729 containerID=1e341c0d44a6f583b56404ba3cbb8e6d190a6bead92ac62577f51ce6b821e1ba description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8pxxl/dashboard-metrics-scraper id=d2fe1369-5cb6-4caa-9a6e-a9d18a47b1d8 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f13c1700211ca9495f08248319168f073ce0892cf683f5035d6d89944e19be74
	Nov 24 14:19:52 no-preload-444317 crio[659]: time="2025-11-24T14:19:52.424424387Z" level=info msg="Removing container: 867c1c2bdba269643d79e651f22075350405e522cc23eb7a82d41b67b0d922a1" id=7dbb0103-93a3-4f31-8958-1fd933befae9 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 24 14:19:52 no-preload-444317 crio[659]: time="2025-11-24T14:19:52.439419458Z" level=info msg="Error loading conmon cgroup of container 867c1c2bdba269643d79e651f22075350405e522cc23eb7a82d41b67b0d922a1: cgroup deleted" id=7dbb0103-93a3-4f31-8958-1fd933befae9 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 24 14:19:52 no-preload-444317 crio[659]: time="2025-11-24T14:19:52.446722354Z" level=info msg="Removed container 867c1c2bdba269643d79e651f22075350405e522cc23eb7a82d41b67b0d922a1: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8pxxl/dashboard-metrics-scraper" id=7dbb0103-93a3-4f31-8958-1fd933befae9 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	1e341c0d44a6f       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           3 seconds ago       Exited              dashboard-metrics-scraper   3                   f13c1700211ca       dashboard-metrics-scraper-6ffb444bf9-8pxxl   kubernetes-dashboard
	7500421b8d518       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           19 seconds ago      Running             storage-provisioner         2                   e618c6f2ab6b3       storage-provisioner                          kube-system
	4037ee2765bda       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   37 seconds ago      Running             kubernetes-dashboard        0                   aab56f53d3211       kubernetes-dashboard-855c9754f9-xdmjc        kubernetes-dashboard
	91d70065b14a0       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           49 seconds ago      Running             coredns                     1                   f1e2c0d4fe590       coredns-66bc5c9577-lrh58                     kube-system
	8eea84e17d64e       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           49 seconds ago      Running             busybox                     1                   0dcbb17a347a6       busybox                                      default
	b98558ac51e4c       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           49 seconds ago      Running             kube-proxy                  1                   ccfb13bd42e1a       kube-proxy-m4fb4                             kube-system
	9edacfe959f78       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           49 seconds ago      Exited              storage-provisioner         1                   e618c6f2ab6b3       storage-provisioner                          kube-system
	1c3203cd2d06f       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           49 seconds ago      Running             kindnet-cni                 1                   b4346711e9408       kindnet-zwxh6                                kube-system
	6fb3ae76e7269       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           56 seconds ago      Running             kube-scheduler              1                   85c8d5d569f2d       kube-scheduler-no-preload-444317             kube-system
	033ef6fd0ada3       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           56 seconds ago      Running             etcd                        1                   ea7a96cb69826       etcd-no-preload-444317                       kube-system
	838b8a2e9df2c       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           56 seconds ago      Running             kube-apiserver              1                   ad22c589251fb       kube-apiserver-no-preload-444317             kube-system
	9fae20cc90ea0       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           56 seconds ago      Running             kube-controller-manager     1                   012905bf7e225       kube-controller-manager-no-preload-444317    kube-system
	
	
	==> coredns [91d70065b14a06021db8c9a017b68c7833b9f540e25841cd0422a6eac3a15b51] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:33049 - 6681 "HINFO IN 6435730273369765947.1452196705638174724. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.024408072s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
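
The three reflector failures above are one symptom: CoreDNS cannot reach the kube-apiserver through the in-cluster Service VIP (10.96.0.1:443) and times out, matching the kindnet and storage-provisioner errors later in this dump. A hedged way to probe that path from inside the cluster, assuming the curlimages/curl image is pullable (the pod name apitest is arbitrary; /version is normally readable by unauthenticated clients via the system:public-info-viewer role):

	kubectl --context no-preload-444317 run apitest --rm -it --restart=Never \
	  --image=curlimages/curl -- curl -ksS -m 5 https://10.96.0.1:443/version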
	
	
	==> describe nodes <==
	Name:               no-preload-444317
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-444317
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b5d1c9f4e75f4e638a533695fd62619949cefcab
	                    minikube.k8s.io/name=no-preload-444317
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T14_18_02_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 14:17:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-444317
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 14:19:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 14:19:35 +0000   Mon, 24 Nov 2025 14:17:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 14:19:35 +0000   Mon, 24 Nov 2025 14:17:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 14:19:35 +0000   Mon, 24 Nov 2025 14:17:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Nov 2025 14:19:35 +0000   Mon, 24 Nov 2025 14:18:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    no-preload-444317
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 7283ea1857f18f20a875c29069214c9d
	  System UUID:                7f3cb54f-ba1b-4064-b92e-1b7768ad96c4
	  Boot ID:                    1b5f797b-5607-4a65-8de2-379783b7e272
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 coredns-66bc5c9577-lrh58                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     110s
	  kube-system                 etcd-no-preload-444317                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         117s
	  kube-system                 kindnet-zwxh6                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      110s
	  kube-system                 kube-apiserver-no-preload-444317              250m (12%)    0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-controller-manager-no-preload-444317     200m (10%)    0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-proxy-m4fb4                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-scheduler-no-preload-444317              100m (5%)     0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-8pxxl    0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-xdmjc         0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
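
For reference, the 42% CPU figure in this table is just the sum of the per-pod requests listed above against the 2-CPU allocatable: 100m (coredns) + 100m (etcd) + 100m (kindnet) + 250m (kube-apiserver) + 200m (kube-controller-manager) + 100m (kube-scheduler) = 850m, and 850m / 2000m = 42.5%, which the output truncates to 42%.
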
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 108s                 kube-proxy       
	  Normal   Starting                 49s                  kube-proxy       
	  Warning  CgroupV1                 2m7s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m7s (x8 over 2m7s)  kubelet          Node no-preload-444317 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m7s (x8 over 2m7s)  kubelet          Node no-preload-444317 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m7s (x8 over 2m7s)  kubelet          Node no-preload-444317 status is now: NodeHasSufficientPID
	  Normal   Starting                 115s                 kubelet          Starting kubelet.
	  Warning  CgroupV1                 115s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  114s                 kubelet          Node no-preload-444317 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    114s                 kubelet          Node no-preload-444317 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     114s                 kubelet          Node no-preload-444317 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           111s                 node-controller  Node no-preload-444317 event: Registered Node no-preload-444317 in Controller
	  Normal   NodeReady                94s                  kubelet          Node no-preload-444317 status is now: NodeReady
	  Normal   Starting                 57s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 57s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  57s (x8 over 57s)    kubelet          Node no-preload-444317 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    57s (x8 over 57s)    kubelet          Node no-preload-444317 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     57s (x8 over 57s)    kubelet          Node no-preload-444317 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           48s                  node-controller  Node no-preload-444317 event: Registered Node no-preload-444317 in Controller
	
	
	==> dmesg <==
	[Nov24 13:54] overlayfs: idmapped layers are currently not supported
	[Nov24 13:56] overlayfs: idmapped layers are currently not supported
	[Nov24 13:57] overlayfs: idmapped layers are currently not supported
	[Nov24 13:58] overlayfs: idmapped layers are currently not supported
	[  +2.963383] overlayfs: idmapped layers are currently not supported
	[ +47.364934] overlayfs: idmapped layers are currently not supported
	[Nov24 13:59] overlayfs: idmapped layers are currently not supported
	[Nov24 14:00] overlayfs: idmapped layers are currently not supported
	[ +26.972375] overlayfs: idmapped layers are currently not supported
	[Nov24 14:02] overlayfs: idmapped layers are currently not supported
	[Nov24 14:03] overlayfs: idmapped layers are currently not supported
	[Nov24 14:05] overlayfs: idmapped layers are currently not supported
	[Nov24 14:07] overlayfs: idmapped layers are currently not supported
	[ +22.741489] overlayfs: idmapped layers are currently not supported
	[Nov24 14:11] overlayfs: idmapped layers are currently not supported
	[Nov24 14:13] overlayfs: idmapped layers are currently not supported
	[ +29.661409] overlayfs: idmapped layers are currently not supported
	[ +14.398898] overlayfs: idmapped layers are currently not supported
	[Nov24 14:14] overlayfs: idmapped layers are currently not supported
	[ +36.148198] overlayfs: idmapped layers are currently not supported
	[Nov24 14:16] overlayfs: idmapped layers are currently not supported
	[Nov24 14:17] overlayfs: idmapped layers are currently not supported
	[Nov24 14:18] overlayfs: idmapped layers are currently not supported
	[ +49.916713] overlayfs: idmapped layers are currently not supported
	[Nov24 14:19] overlayfs: idmapped layers are currently not supported
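
Because the docker driver runs the "node" as a privileged container, dmesg here is the shared host kernel log: the repeated "overlayfs: idmapped layers are currently not supported" lines (a benign message from 5.15-era kernels whenever a runtime attempts an id-mapped overlay mount) come from every test profile started on this host between 13:54 and 14:19, not just no-preload-444317. The same buffer can be read directly, assuming the privileged node container may access the kernel ring buffer:

	docker exec no-preload-444317 dmesg | tail -n 5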
	
	
	==> etcd [033ef6fd0ada365a2ecc235eed62496fe7b0a609cd2b260dacf36429246eb827] <==
	{"level":"warn","ts":"2025-11-24T14:19:02.915818Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57418","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:19:02.944459Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57428","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:19:02.954050Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57450","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:19:02.968908Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57474","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:19:02.985985Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57492","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:19:03.005020Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57516","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:19:03.020072Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57544","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:19:03.035104Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57556","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:19:03.059193Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57596","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:19:03.085552Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57604","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:19:03.090690Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:19:03.120684Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57642","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:19:03.132581Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57654","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:19:03.147513Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57674","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:19:03.163412Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57698","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:19:03.179063Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57706","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:19:03.198124Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57718","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:19:03.215905Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57734","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:19:03.240160Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57750","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:19:03.259154Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57770","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:19:03.294703Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57798","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:19:03.314375Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57816","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:19:03.336102Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57832","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:19:03.348140Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57854","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:19:03.406933Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57866","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 14:19:55 up  2:02,  0 user,  load average: 2.32, 2.82, 2.47
	Linux no-preload-444317 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [1c3203cd2d06f35ce87a959266c2a7517112b74ce7421df704dc7f717e2c1e12] <==
	I1124 14:19:05.818985       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1124 14:19:05.819209       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1124 14:19:05.819326       1 main.go:148] setting mtu 1500 for CNI 
	I1124 14:19:05.819338       1 main.go:178] kindnetd IP family: "ipv4"
	I1124 14:19:05.819348       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-24T14:19:06Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1124 14:19:06.017255       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1124 14:19:06.017347       1 controller.go:381] "Waiting for informer caches to sync"
	I1124 14:19:06.017381       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1124 14:19:06.020412       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1124 14:19:36.018161       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1124 14:19:36.020617       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1124 14:19:36.020624       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1124 14:19:36.020718       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1124 14:19:37.219883       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1124 14:19:37.219916       1 metrics.go:72] Registering metrics
	I1124 14:19:37.219976       1 controller.go:711] "Syncing nftables rules"
	I1124 14:19:46.022237       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1124 14:19:46.022282       1 main.go:301] handling current node
	
	
	==> kube-apiserver [838b8a2e9df2c5d45fa6bef18fa814af0df8f6efe64027561859a41453484af0] <==
	I1124 14:19:04.594753       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1124 14:19:04.597124       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1124 14:19:04.597224       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1124 14:19:04.597267       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1124 14:19:04.599441       1 aggregator.go:171] initial CRD sync complete...
	I1124 14:19:04.599451       1 autoregister_controller.go:144] Starting autoregister controller
	I1124 14:19:04.599457       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1124 14:19:04.599463       1 cache.go:39] Caches are synced for autoregister controller
	I1124 14:19:04.600963       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1124 14:19:04.601190       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1124 14:19:04.601240       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1124 14:19:04.608507       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1124 14:19:04.612660       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	E1124 14:19:04.613592       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1124 14:19:05.008686       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1124 14:19:05.101376       1 controller.go:667] quota admission added evaluator for: namespaces
	I1124 14:19:05.181395       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1124 14:19:05.189560       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1124 14:19:05.352764       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1124 14:19:05.420052       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1124 14:19:05.790087       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.106.252.211"}
	I1124 14:19:05.865762       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.108.148.158"}
	I1124 14:19:07.818065       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1124 14:19:08.165475       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1124 14:19:08.275313       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [9fae20cc90ea0e80a0e993b46f094ba9011120aee92f560781378c0ce54c97cb] <==
	I1124 14:19:07.838608       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-444317"
	I1124 14:19:07.838666       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1124 14:19:07.838676       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1124 14:19:07.838764       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1124 14:19:07.840831       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1124 14:19:07.844998       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1124 14:19:07.845592       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 14:19:07.847421       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1124 14:19:07.856098       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1124 14:19:07.857016       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1124 14:19:07.857577       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1124 14:19:07.857779       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1124 14:19:07.858136       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1124 14:19:07.858329       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1124 14:19:07.858398       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1124 14:19:07.858459       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1124 14:19:07.858516       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1124 14:19:07.858573       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1124 14:19:07.858800       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1124 14:19:07.864031       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 14:19:07.864133       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1124 14:19:07.874788       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1124 14:19:07.884895       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 14:19:07.884982       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1124 14:19:07.885014       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [b98558ac51e4ca540f3cbbc6b2f05fe5584b13c8eb1c8764289a95ecdde989f6] <==
	I1124 14:19:06.005800       1 server_linux.go:53] "Using iptables proxy"
	I1124 14:19:06.128127       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1124 14:19:06.238574       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1124 14:19:06.238751       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1124 14:19:06.238915       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 14:19:06.309777       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 14:19:06.309837       1 server_linux.go:132] "Using iptables Proxier"
	I1124 14:19:06.320782       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 14:19:06.321149       1 server.go:527] "Version info" version="v1.34.1"
	I1124 14:19:06.321171       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 14:19:06.331875       1 config.go:106] "Starting endpoint slice config controller"
	I1124 14:19:06.331896       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 14:19:06.332182       1 config.go:200] "Starting service config controller"
	I1124 14:19:06.332197       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 14:19:06.332480       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 14:19:06.332495       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1124 14:19:06.332908       1 config.go:309] "Starting node config controller"
	I1124 14:19:06.332923       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 14:19:06.332930       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1124 14:19:06.432577       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1124 14:19:06.432670       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1124 14:19:06.432701       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [6fb3ae76e7269290f5063bc5ecd82c590e464d07c67d0618070feb631692598d] <==
	W1124 14:19:04.049306       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1124 14:19:04.455231       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1124 14:19:04.455269       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 14:19:04.469716       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1124 14:19:04.470725       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 14:19:04.470759       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 14:19:04.470784       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1124 14:19:04.547679       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1124 14:19:04.547918       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1124 14:19:04.547965       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1124 14:19:04.548255       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1124 14:19:04.548326       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1124 14:19:04.548399       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1124 14:19:04.548026       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1124 14:19:04.548542       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1124 14:19:04.548614       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1124 14:19:04.548664       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1124 14:19:04.563320       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1124 14:19:04.563555       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1124 14:19:04.563681       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1124 14:19:04.564205       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1124 14:19:04.564280       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1124 14:19:04.564608       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1124 14:19:04.564684       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	I1124 14:19:04.571293       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
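
The burst of "Failed to watch … is forbidden" errors above is most likely the usual transient during a control-plane restart: the scheduler comes up at 14:19:04 while the freshly restarted apiserver is still syncing, so its first list calls are rejected before the bootstrap RBAC for system:kube-scheduler is served; the final "Caches are synced" line shows the retries succeeded. The bootstrap binding itself can be confirmed with:

	kubectl --context no-preload-444317 get clusterrolebinding system:kube-scheduler -o wide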
	
	
	==> kubelet <==
	Nov 24 14:19:08 no-preload-444317 kubelet[780]: I1124 14:19:08.420422     780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/0ca9e6a2-e143-4b08-bfaf-a541eb0f842b-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-xdmjc\" (UID: \"0ca9e6a2-e143-4b08-bfaf-a541eb0f842b\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-xdmjc"
	Nov 24 14:19:08 no-preload-444317 kubelet[780]: I1124 14:19:08.420641     780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddrnh\" (UniqueName: \"kubernetes.io/projected/0ca9e6a2-e143-4b08-bfaf-a541eb0f842b-kube-api-access-ddrnh\") pod \"kubernetes-dashboard-855c9754f9-xdmjc\" (UID: \"0ca9e6a2-e143-4b08-bfaf-a541eb0f842b\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-xdmjc"
	Nov 24 14:19:08 no-preload-444317 kubelet[780]: W1124 14:19:08.605402     780 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/ade20648158abf5218a944caa623bf3f6036e6cac8f095be63310184940923ce/crio-aab56f53d3211a2d7015f2be240a988cd7cbcbe144d520e77aaab3f135c01e3e WatchSource:0}: Error finding container aab56f53d3211a2d7015f2be240a988cd7cbcbe144d520e77aaab3f135c01e3e: Status 404 returned error can't find the container with id aab56f53d3211a2d7015f2be240a988cd7cbcbe144d520e77aaab3f135c01e3e
	Nov 24 14:19:08 no-preload-444317 kubelet[780]: W1124 14:19:08.607180     780 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/ade20648158abf5218a944caa623bf3f6036e6cac8f095be63310184940923ce/crio-f13c1700211ca9495f08248319168f073ce0892cf683f5035d6d89944e19be74 WatchSource:0}: Error finding container f13c1700211ca9495f08248319168f073ce0892cf683f5035d6d89944e19be74: Status 404 returned error can't find the container with id f13c1700211ca9495f08248319168f073ce0892cf683f5035d6d89944e19be74
	Nov 24 14:19:13 no-preload-444317 kubelet[780]: I1124 14:19:13.290866     780 scope.go:117] "RemoveContainer" containerID="fc15f4ee1ed99d843cb406e8a62c8ea2e5ba6ff8d1ff1051d38cd97570ed9ad2"
	Nov 24 14:19:14 no-preload-444317 kubelet[780]: I1124 14:19:14.294718     780 scope.go:117] "RemoveContainer" containerID="fc15f4ee1ed99d843cb406e8a62c8ea2e5ba6ff8d1ff1051d38cd97570ed9ad2"
	Nov 24 14:19:14 no-preload-444317 kubelet[780]: I1124 14:19:14.295592     780 scope.go:117] "RemoveContainer" containerID="39f55880affebdac3a628db015dc76fbc8a3e718929b1fa8903ab4a79db2bca4"
	Nov 24 14:19:14 no-preload-444317 kubelet[780]: E1124 14:19:14.299592     780 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-8pxxl_kubernetes-dashboard(a2981b6c-eb71-4d4c-b08d-ba24f8243546)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8pxxl" podUID="a2981b6c-eb71-4d4c-b08d-ba24f8243546"
	Nov 24 14:19:15 no-preload-444317 kubelet[780]: I1124 14:19:15.298715     780 scope.go:117] "RemoveContainer" containerID="39f55880affebdac3a628db015dc76fbc8a3e718929b1fa8903ab4a79db2bca4"
	Nov 24 14:19:15 no-preload-444317 kubelet[780]: E1124 14:19:15.298861     780 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-8pxxl_kubernetes-dashboard(a2981b6c-eb71-4d4c-b08d-ba24f8243546)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8pxxl" podUID="a2981b6c-eb71-4d4c-b08d-ba24f8243546"
	Nov 24 14:19:18 no-preload-444317 kubelet[780]: I1124 14:19:18.560156     780 scope.go:117] "RemoveContainer" containerID="39f55880affebdac3a628db015dc76fbc8a3e718929b1fa8903ab4a79db2bca4"
	Nov 24 14:19:18 no-preload-444317 kubelet[780]: E1124 14:19:18.560897     780 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-8pxxl_kubernetes-dashboard(a2981b6c-eb71-4d4c-b08d-ba24f8243546)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8pxxl" podUID="a2981b6c-eb71-4d4c-b08d-ba24f8243546"
	Nov 24 14:19:29 no-preload-444317 kubelet[780]: I1124 14:19:29.114668     780 scope.go:117] "RemoveContainer" containerID="39f55880affebdac3a628db015dc76fbc8a3e718929b1fa8903ab4a79db2bca4"
	Nov 24 14:19:29 no-preload-444317 kubelet[780]: I1124 14:19:29.350139     780 scope.go:117] "RemoveContainer" containerID="39f55880affebdac3a628db015dc76fbc8a3e718929b1fa8903ab4a79db2bca4"
	Nov 24 14:19:29 no-preload-444317 kubelet[780]: I1124 14:19:29.350499     780 scope.go:117] "RemoveContainer" containerID="867c1c2bdba269643d79e651f22075350405e522cc23eb7a82d41b67b0d922a1"
	Nov 24 14:19:29 no-preload-444317 kubelet[780]: E1124 14:19:29.350716     780 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-8pxxl_kubernetes-dashboard(a2981b6c-eb71-4d4c-b08d-ba24f8243546)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8pxxl" podUID="a2981b6c-eb71-4d4c-b08d-ba24f8243546"
	Nov 24 14:19:29 no-preload-444317 kubelet[780]: I1124 14:19:29.370499     780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-xdmjc" podStartSLOduration=11.884572263 podStartE2EDuration="21.370482081s" podCreationTimestamp="2025-11-24 14:19:08 +0000 UTC" firstStartedPulling="2025-11-24 14:19:08.61627678 +0000 UTC m=+10.672214874" lastFinishedPulling="2025-11-24 14:19:18.102186599 +0000 UTC m=+20.158124692" observedRunningTime="2025-11-24 14:19:18.362714759 +0000 UTC m=+20.418652870" watchObservedRunningTime="2025-11-24 14:19:29.370482081 +0000 UTC m=+31.426420183"
	Nov 24 14:19:36 no-preload-444317 kubelet[780]: I1124 14:19:36.371042     780 scope.go:117] "RemoveContainer" containerID="9edacfe959f782420519cd918af883fab72dba651cea4c1003317aa7dbb5aee2"
	Nov 24 14:19:38 no-preload-444317 kubelet[780]: I1124 14:19:38.561040     780 scope.go:117] "RemoveContainer" containerID="867c1c2bdba269643d79e651f22075350405e522cc23eb7a82d41b67b0d922a1"
	Nov 24 14:19:38 no-preload-444317 kubelet[780]: E1124 14:19:38.561756     780 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-8pxxl_kubernetes-dashboard(a2981b6c-eb71-4d4c-b08d-ba24f8243546)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8pxxl" podUID="a2981b6c-eb71-4d4c-b08d-ba24f8243546"
	Nov 24 14:19:52 no-preload-444317 kubelet[780]: I1124 14:19:52.113919     780 scope.go:117] "RemoveContainer" containerID="867c1c2bdba269643d79e651f22075350405e522cc23eb7a82d41b67b0d922a1"
	Nov 24 14:19:52 no-preload-444317 kubelet[780]: I1124 14:19:52.419329     780 scope.go:117] "RemoveContainer" containerID="867c1c2bdba269643d79e651f22075350405e522cc23eb7a82d41b67b0d922a1"
	Nov 24 14:19:52 no-preload-444317 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 24 14:19:52 no-preload-444317 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 24 14:19:52 no-preload-444317 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
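
The back-off values in the kubelet errors above (10s, then 20s) follow kubelet's standard CrashLoopBackOff schedule: the restart delay doubles after each failed restart and is capped at five minutes. Two commands for chasing the loop from outside; --previous prints the log of the last terminated instance of the container:

	kubectl --context no-preload-444317 -n kubernetes-dashboard get pod dashboard-metrics-scraper-6ffb444bf9-8pxxl -w
	kubectl --context no-preload-444317 -n kubernetes-dashboard logs dashboard-metrics-scraper-6ffb444bf9-8pxxl --previous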
	
	
	==> kubernetes-dashboard [4037ee2765bda709b510d5f015e77323b584c6dd7204ad7c638918dcd2628c45] <==
	2025/11/24 14:19:18 Using namespace: kubernetes-dashboard
	2025/11/24 14:19:18 Using in-cluster config to connect to apiserver
	2025/11/24 14:19:18 Using secret token for csrf signing
	2025/11/24 14:19:18 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/24 14:19:18 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/24 14:19:18 Successful initial request to the apiserver, version: v1.34.1
	2025/11/24 14:19:18 Generating JWE encryption key
	2025/11/24 14:19:18 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/24 14:19:18 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/24 14:19:18 Initializing JWE encryption key from synchronized object
	2025/11/24 14:19:18 Creating in-cluster Sidecar client
	2025/11/24 14:19:18 Serving insecurely on HTTP port: 9090
	2025/11/24 14:19:18 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/24 14:19:48 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/24 14:19:18 Starting overwatch
	
	
	==> storage-provisioner [7500421b8d518959966543c2fb44123cf1e925d09b9f3a19358de4f5ccaf03f5] <==
	I1124 14:19:36.454041       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1124 14:19:36.467287       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1124 14:19:36.467463       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1124 14:19:36.469758       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:19:39.925350       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:19:44.185780       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:19:47.784302       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:19:50.837993       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:19:53.868372       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:19:53.885985       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1124 14:19:53.886143       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1124 14:19:53.886300       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-444317_01c6efbd-497c-4bba-bc5a-ac22cf644059!
	I1124 14:19:53.886568       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"32afcba2-0797-489f-b777-85af3a10990a", APIVersion:"v1", ResourceVersion:"690", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-444317_01c6efbd-497c-4bba-bc5a-ac22cf644059 became leader
	W1124 14:19:53.897165       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:19:53.924795       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1124 14:19:53.987460       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-444317_01c6efbd-497c-4bba-bc5a-ac22cf644059!
	W1124 14:19:55.929638       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:19:55.937085       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
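
The recurring "v1 Endpoints is deprecated" warnings come from the provisioner's leader election, which still takes its lock through a v1 Endpoints object (the k8s.io-minikube-hostpath lease acquired at 14:19:53 above); they are warnings only. The lock object can be inspected with:

	kubectl --context no-preload-444317 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml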
	
	
	==> storage-provisioner [9edacfe959f782420519cd918af883fab72dba651cea4c1003317aa7dbb5aee2] <==
	I1124 14:19:05.803675       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1124 14:19:35.806194       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
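
This fatal error is the same 10.96.0.1:443 i/o timeout seen in the coredns and kindnet sections, hit 30s after this instance started (14:19:05.8 → 14:19:35.8); its replacement (7500421b…, previous section) initialized cleanly at 14:19:36, so the Service VIP appears to have become reachable only after that window. The VIP and its backing endpoints can be checked with:

	kubectl --context no-preload-444317 -n default get svc kubernetes
	kubectl --context no-preload-444317 -n default get endpointslices -l kubernetes.io/service-name=kubernetes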
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-444317 -n no-preload-444317
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-444317 -n no-preload-444317: exit status 2 (522.36629ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-444317 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-444317
helpers_test.go:243: (dbg) docker inspect no-preload-444317:

-- stdout --
	[
	    {
	        "Id": "ade20648158abf5218a944caa623bf3f6036e6cac8f095be63310184940923ce",
	        "Created": "2025-11-24T14:17:08.709891648Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 196004,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T14:18:51.022201393Z",
	            "FinishedAt": "2025-11-24T14:18:50.169456401Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/ade20648158abf5218a944caa623bf3f6036e6cac8f095be63310184940923ce/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ade20648158abf5218a944caa623bf3f6036e6cac8f095be63310184940923ce/hostname",
	        "HostsPath": "/var/lib/docker/containers/ade20648158abf5218a944caa623bf3f6036e6cac8f095be63310184940923ce/hosts",
	        "LogPath": "/var/lib/docker/containers/ade20648158abf5218a944caa623bf3f6036e6cac8f095be63310184940923ce/ade20648158abf5218a944caa623bf3f6036e6cac8f095be63310184940923ce-json.log",
	        "Name": "/no-preload-444317",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-444317:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-444317",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ade20648158abf5218a944caa623bf3f6036e6cac8f095be63310184940923ce",
	                "LowerDir": "/var/lib/docker/overlay2/a5efc0bfe8b92c8f524d5dc30bc92e055435f884ab7bf2fa08436557c135aef1-init/diff:/var/lib/docker/overlay2/13a44a1c9c7389f495d930a01834ff28273a0e5eb2fe3411fc4db3ff0709690d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a5efc0bfe8b92c8f524d5dc30bc92e055435f884ab7bf2fa08436557c135aef1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a5efc0bfe8b92c8f524d5dc30bc92e055435f884ab7bf2fa08436557c135aef1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a5efc0bfe8b92c8f524d5dc30bc92e055435f884ab7bf2fa08436557c135aef1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-444317",
	                "Source": "/var/lib/docker/volumes/no-preload-444317/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-444317",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-444317",
	                "name.minikube.sigs.k8s.io": "no-preload-444317",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "bf99d0dab9c1b251e90cd738e9c0b89b6a83525451730ff8e575e7a1689d4cb9",
	            "SandboxKey": "/var/run/docker/netns/bf99d0dab9c1",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33068"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33069"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33072"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33070"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33071"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-444317": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "e6:8a:56:48:3f:9e",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "02f1d732c24a57a6012dfe448076c210da6d01bbcb8679ec8ce3692995d11521",
	                    "EndpointID": "e0dab7e3a24274195aa31d28438fb53bf57c22731fc13e8bca0fbcf1428a37b3",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-444317",
	                        "ade20648158a"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
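The inspect dump shows the Docker layer itself is healthy: "Status": "running", "Paused": false, RestartCount 0, and the apiserver port 8443/tcp forwarded to 127.0.0.1:33071, so the pause failure sits above Docker. When reading dumps like this, the same Go-template syntax the harness uses elsewhere (see the `docker container inspect -f` calls in the minikube logs below) pulls out single fields directly; a sketch against this run's container name:

	# container state at a glance
	docker inspect -f '{{.State.Status}} paused={{.State.Paused}} restarts={{.RestartCount}}' no-preload-444317
	# host port forwarded to the apiserver inside the container
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' no-preload-444317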
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-444317 -n no-preload-444317
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-444317 -n no-preload-444317: exit status 2 (458.992845ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
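minikube status exits non-zero whenever any tracked component is off its expected state, so with the host container Running but the cluster mid-pause the harness gets exit status 2 and explicitly tolerates it ("may be ok"). All of the tracked fields can be read in one pass via the documented --format Go template; a sketch using this run's profile:

	out/minikube-linux-arm64 status -p no-preload-444317 --format='host={{.Host}} kubelet={{.Kubelet}} apiserver={{.APIServer}} kubeconfig={{.Kubeconfig}}'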
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-444317 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-444317 logs -n 25: (1.636776948s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cert-options-097221 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-097221    │ jenkins │ v1.37.0 │ 24 Nov 25 14:14 UTC │ 24 Nov 25 14:14 UTC │
	│ delete  │ -p cert-options-097221                                                                                                                                                                                                                        │ cert-options-097221    │ jenkins │ v1.37.0 │ 24 Nov 25 14:14 UTC │ 24 Nov 25 14:14 UTC │
	│ start   │ -p old-k8s-version-706771 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-706771 │ jenkins │ v1.37.0 │ 24 Nov 25 14:14 UTC │ 24 Nov 25 14:15 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-706771 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-706771 │ jenkins │ v1.37.0 │ 24 Nov 25 14:15 UTC │                     │
	│ stop    │ -p old-k8s-version-706771 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-706771 │ jenkins │ v1.37.0 │ 24 Nov 25 14:15 UTC │ 24 Nov 25 14:15 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-706771 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-706771 │ jenkins │ v1.37.0 │ 24 Nov 25 14:15 UTC │ 24 Nov 25 14:15 UTC │
	│ start   │ -p old-k8s-version-706771 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-706771 │ jenkins │ v1.37.0 │ 24 Nov 25 14:15 UTC │ 24 Nov 25 14:16 UTC │
	│ image   │ old-k8s-version-706771 image list --format=json                                                                                                                                                                                               │ old-k8s-version-706771 │ jenkins │ v1.37.0 │ 24 Nov 25 14:16 UTC │ 24 Nov 25 14:16 UTC │
	│ pause   │ -p old-k8s-version-706771 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-706771 │ jenkins │ v1.37.0 │ 24 Nov 25 14:16 UTC │                     │
	│ start   │ -p cert-expiration-032076 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-032076 │ jenkins │ v1.37.0 │ 24 Nov 25 14:17 UTC │ 24 Nov 25 14:17 UTC │
	│ delete  │ -p old-k8s-version-706771                                                                                                                                                                                                                     │ old-k8s-version-706771 │ jenkins │ v1.37.0 │ 24 Nov 25 14:17 UTC │ 24 Nov 25 14:17 UTC │
	│ delete  │ -p old-k8s-version-706771                                                                                                                                                                                                                     │ old-k8s-version-706771 │ jenkins │ v1.37.0 │ 24 Nov 25 14:17 UTC │ 24 Nov 25 14:17 UTC │
	│ start   │ -p no-preload-444317 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-444317      │ jenkins │ v1.37.0 │ 24 Nov 25 14:17 UTC │ 24 Nov 25 14:18 UTC │
	│ delete  │ -p cert-expiration-032076                                                                                                                                                                                                                     │ cert-expiration-032076 │ jenkins │ v1.37.0 │ 24 Nov 25 14:17 UTC │ 24 Nov 25 14:17 UTC │
	│ start   │ -p embed-certs-720293 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-720293     │ jenkins │ v1.37.0 │ 24 Nov 25 14:17 UTC │ 24 Nov 25 14:19 UTC │
	│ addons  │ enable metrics-server -p no-preload-444317 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-444317      │ jenkins │ v1.37.0 │ 24 Nov 25 14:18 UTC │                     │
	│ stop    │ -p no-preload-444317 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-444317      │ jenkins │ v1.37.0 │ 24 Nov 25 14:18 UTC │ 24 Nov 25 14:18 UTC │
	│ addons  │ enable dashboard -p no-preload-444317 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-444317      │ jenkins │ v1.37.0 │ 24 Nov 25 14:18 UTC │ 24 Nov 25 14:18 UTC │
	│ start   │ -p no-preload-444317 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-444317      │ jenkins │ v1.37.0 │ 24 Nov 25 14:18 UTC │ 24 Nov 25 14:19 UTC │
	│ addons  │ enable metrics-server -p embed-certs-720293 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-720293     │ jenkins │ v1.37.0 │ 24 Nov 25 14:19 UTC │                     │
	│ stop    │ -p embed-certs-720293 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-720293     │ jenkins │ v1.37.0 │ 24 Nov 25 14:19 UTC │ 24 Nov 25 14:19 UTC │
	│ addons  │ enable dashboard -p embed-certs-720293 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-720293     │ jenkins │ v1.37.0 │ 24 Nov 25 14:19 UTC │ 24 Nov 25 14:19 UTC │
	│ start   │ -p embed-certs-720293 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-720293     │ jenkins │ v1.37.0 │ 24 Nov 25 14:19 UTC │                     │
	│ image   │ no-preload-444317 image list --format=json                                                                                                                                                                                                    │ no-preload-444317      │ jenkins │ v1.37.0 │ 24 Nov 25 14:19 UTC │ 24 Nov 25 14:19 UTC │
	│ pause   │ -p no-preload-444317 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-444317      │ jenkins │ v1.37.0 │ 24 Nov 25 14:19 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 14:19:32
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 14:19:32.542428  198824 out.go:360] Setting OutFile to fd 1 ...
	I1124 14:19:32.542789  198824 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 14:19:32.542803  198824 out.go:374] Setting ErrFile to fd 2...
	I1124 14:19:32.542810  198824 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 14:19:32.543105  198824 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-2805/.minikube/bin
	I1124 14:19:32.543572  198824 out.go:368] Setting JSON to false
	I1124 14:19:32.544969  198824 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":7324,"bootTime":1763986649,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1124 14:19:32.545048  198824 start.go:143] virtualization:  
	I1124 14:19:32.548403  198824 out.go:179] * [embed-certs-720293] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1124 14:19:32.552315  198824 out.go:179]   - MINIKUBE_LOCATION=21932
	I1124 14:19:32.552432  198824 notify.go:221] Checking for updates...
	I1124 14:19:32.558538  198824 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 14:19:32.561614  198824 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21932-2805/kubeconfig
	I1124 14:19:32.564572  198824 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21932-2805/.minikube
	I1124 14:19:32.567744  198824 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1124 14:19:32.570620  198824 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 14:19:32.574068  198824 config.go:182] Loaded profile config "embed-certs-720293": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 14:19:32.574620  198824 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 14:19:32.600910  198824 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1124 14:19:32.601026  198824 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 14:19:32.665060  198824 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-24 14:19:32.65533038 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 14:19:32.665267  198824 docker.go:319] overlay module found
	I1124 14:19:32.670237  198824 out.go:179] * Using the docker driver based on existing profile
	I1124 14:19:32.672984  198824 start.go:309] selected driver: docker
	I1124 14:19:32.673002  198824 start.go:927] validating driver "docker" against &{Name:embed-certs-720293 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-720293 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 14:19:32.673100  198824 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 14:19:32.673896  198824 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 14:19:32.730978  198824 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-24 14:19:32.721300318 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 14:19:32.731326  198824 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 14:19:32.731400  198824 cni.go:84] Creating CNI manager for ""
	I1124 14:19:32.731458  198824 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 14:19:32.731500  198824 start.go:353] cluster config:
	{Name:embed-certs-720293 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-720293 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 14:19:32.736736  198824 out.go:179] * Starting "embed-certs-720293" primary control-plane node in "embed-certs-720293" cluster
	I1124 14:19:32.739533  198824 cache.go:134] Beginning downloading kic base image for docker with crio
	I1124 14:19:32.742433  198824 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1124 14:19:32.745317  198824 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 14:19:32.745403  198824 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21932-2805/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1124 14:19:32.745448  198824 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1124 14:19:32.745674  198824 cache.go:65] Caching tarball of preloaded images
	I1124 14:19:32.745772  198824 preload.go:238] Found /home/jenkins/minikube-integration/21932-2805/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1124 14:19:32.745784  198824 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1124 14:19:32.745922  198824 profile.go:143] Saving config to /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/embed-certs-720293/config.json ...
	I1124 14:19:32.771887  198824 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1124 14:19:32.771911  198824 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1124 14:19:32.771932  198824 cache.go:240] Successfully downloaded all kic artifacts
	I1124 14:19:32.771962  198824 start.go:360] acquireMachinesLock for embed-certs-720293: {Name:mk63d8a86030ce5af3799b85ca4bd5722aa0f10b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 14:19:32.772027  198824 start.go:364] duration metric: took 43.11µs to acquireMachinesLock for "embed-certs-720293"
	I1124 14:19:32.772049  198824 start.go:96] Skipping create...Using existing machine configuration
	I1124 14:19:32.772058  198824 fix.go:54] fixHost starting: 
	I1124 14:19:32.772311  198824 cli_runner.go:164] Run: docker container inspect embed-certs-720293 --format={{.State.Status}}
	I1124 14:19:32.790150  198824 fix.go:112] recreateIfNeeded on embed-certs-720293: state=Stopped err=<nil>
	W1124 14:19:32.790179  198824 fix.go:138] unexpected machine state, will restart: <nil>
	W1124 14:19:31.567589  195877 pod_ready.go:104] pod "coredns-66bc5c9577-lrh58" is not "Ready", error: <nil>
	W1124 14:19:33.571844  195877 pod_ready.go:104] pod "coredns-66bc5c9577-lrh58" is not "Ready", error: <nil>
	I1124 14:19:32.793455  198824 out.go:252] * Restarting existing docker container for "embed-certs-720293" ...
	I1124 14:19:32.793556  198824 cli_runner.go:164] Run: docker start embed-certs-720293
	I1124 14:19:33.072202  198824 cli_runner.go:164] Run: docker container inspect embed-certs-720293 --format={{.State.Status}}
	I1124 14:19:33.092725  198824 kic.go:430] container "embed-certs-720293" state is running.
	I1124 14:19:33.093184  198824 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-720293
	I1124 14:19:33.115490  198824 profile.go:143] Saving config to /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/embed-certs-720293/config.json ...
	I1124 14:19:33.115706  198824 machine.go:94] provisionDockerMachine start ...
	I1124 14:19:33.115764  198824 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-720293
	I1124 14:19:33.144021  198824 main.go:143] libmachine: Using SSH client type: native
	I1124 14:19:33.144485  198824 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I1124 14:19:33.144499  198824 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 14:19:33.145076  198824 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:49890->127.0.0.1:33073: read: connection reset by peer
	I1124 14:19:36.299179  198824 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-720293
	
	I1124 14:19:36.299203  198824 ubuntu.go:182] provisioning hostname "embed-certs-720293"
	I1124 14:19:36.299265  198824 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-720293
	I1124 14:19:36.321875  198824 main.go:143] libmachine: Using SSH client type: native
	I1124 14:19:36.322195  198824 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I1124 14:19:36.322212  198824 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-720293 && echo "embed-certs-720293" | sudo tee /etc/hostname
	I1124 14:19:36.502653  198824 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-720293
	
	I1124 14:19:36.502770  198824 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-720293
	I1124 14:19:36.525398  198824 main.go:143] libmachine: Using SSH client type: native
	I1124 14:19:36.525715  198824 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I1124 14:19:36.525737  198824 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-720293' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-720293/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-720293' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 14:19:36.679526  198824 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 14:19:36.679564  198824 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21932-2805/.minikube CaCertPath:/home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21932-2805/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21932-2805/.minikube}
	I1124 14:19:36.679609  198824 ubuntu.go:190] setting up certificates
	I1124 14:19:36.679620  198824 provision.go:84] configureAuth start
	I1124 14:19:36.679693  198824 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-720293
	I1124 14:19:36.697153  198824 provision.go:143] copyHostCerts
	I1124 14:19:36.697225  198824 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-2805/.minikube/ca.pem, removing ...
	I1124 14:19:36.697244  198824 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-2805/.minikube/ca.pem
	I1124 14:19:36.697320  198824 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21932-2805/.minikube/ca.pem (1078 bytes)
	I1124 14:19:36.697483  198824 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-2805/.minikube/cert.pem, removing ...
	I1124 14:19:36.697496  198824 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-2805/.minikube/cert.pem
	I1124 14:19:36.697532  198824 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-2805/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21932-2805/.minikube/cert.pem (1123 bytes)
	I1124 14:19:36.697610  198824 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-2805/.minikube/key.pem, removing ...
	I1124 14:19:36.697621  198824 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-2805/.minikube/key.pem
	I1124 14:19:36.697648  198824 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-2805/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21932-2805/.minikube/key.pem (1675 bytes)
	I1124 14:19:36.697712  198824 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21932-2805/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca-key.pem org=jenkins.embed-certs-720293 san=[127.0.0.1 192.168.85.2 embed-certs-720293 localhost minikube]
	I1124 14:19:36.961870  198824 provision.go:177] copyRemoteCerts
	I1124 14:19:36.961939  198824 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 14:19:36.961983  198824 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-720293
	I1124 14:19:36.982373  198824 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/embed-certs-720293/id_rsa Username:docker}
	I1124 14:19:37.095183  198824 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1124 14:19:37.114029  198824 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1124 14:19:37.135747  198824 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1124 14:19:37.154343  198824 provision.go:87] duration metric: took 474.697114ms to configureAuth
	I1124 14:19:37.154372  198824 ubuntu.go:206] setting minikube options for container-runtime
	I1124 14:19:37.154594  198824 config.go:182] Loaded profile config "embed-certs-720293": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 14:19:37.154716  198824 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-720293
	I1124 14:19:37.172876  198824 main.go:143] libmachine: Using SSH client type: native
	I1124 14:19:37.173189  198824 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I1124 14:19:37.173208  198824 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	W1124 14:19:36.068284  195877 pod_ready.go:104] pod "coredns-66bc5c9577-lrh58" is not "Ready", error: <nil>
	I1124 14:19:37.567615  195877 pod_ready.go:94] pod "coredns-66bc5c9577-lrh58" is "Ready"
	I1124 14:19:37.567641  195877 pod_ready.go:86] duration metric: took 31.005915484s for pod "coredns-66bc5c9577-lrh58" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:19:37.570257  195877 pod_ready.go:83] waiting for pod "etcd-no-preload-444317" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:19:37.574549  195877 pod_ready.go:94] pod "etcd-no-preload-444317" is "Ready"
	I1124 14:19:37.574626  195877 pod_ready.go:86] duration metric: took 4.347385ms for pod "etcd-no-preload-444317" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:19:37.577335  195877 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-444317" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:19:37.582417  195877 pod_ready.go:94] pod "kube-apiserver-no-preload-444317" is "Ready"
	I1124 14:19:37.582439  195877 pod_ready.go:86] duration metric: took 5.081047ms for pod "kube-apiserver-no-preload-444317" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:19:37.584983  195877 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-444317" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:19:37.766306  195877 pod_ready.go:94] pod "kube-controller-manager-no-preload-444317" is "Ready"
	I1124 14:19:37.766330  195877 pod_ready.go:86] duration metric: took 181.326236ms for pod "kube-controller-manager-no-preload-444317" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:19:37.966603  195877 pod_ready.go:83] waiting for pod "kube-proxy-m4fb4" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:19:38.366582  195877 pod_ready.go:94] pod "kube-proxy-m4fb4" is "Ready"
	I1124 14:19:38.366610  195877 pod_ready.go:86] duration metric: took 399.984444ms for pod "kube-proxy-m4fb4" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:19:38.567099  195877 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-444317" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:19:38.966162  195877 pod_ready.go:94] pod "kube-scheduler-no-preload-444317" is "Ready"
	I1124 14:19:38.966188  195877 pod_ready.go:86] duration metric: took 399.061177ms for pod "kube-scheduler-no-preload-444317" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:19:38.966201  195877 pod_ready.go:40] duration metric: took 32.410793769s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 14:19:39.044825  195877 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1124 14:19:39.049783  195877 out.go:179] * Done! kubectl is now configured to use "no-preload-444317" cluster and "default" namespace by default
	I1124 14:19:37.589217  198824 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1124 14:19:37.589280  198824 machine.go:97] duration metric: took 4.473564566s to provisionDockerMachine
	I1124 14:19:37.589309  198824 start.go:293] postStartSetup for "embed-certs-720293" (driver="docker")
	I1124 14:19:37.589365  198824 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 14:19:37.589472  198824 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 14:19:37.589540  198824 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-720293
	I1124 14:19:37.613741  198824 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/embed-certs-720293/id_rsa Username:docker}
	I1124 14:19:37.719527  198824 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 14:19:37.722736  198824 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 14:19:37.722762  198824 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 14:19:37.722773  198824 filesync.go:126] Scanning /home/jenkins/minikube-integration/21932-2805/.minikube/addons for local assets ...
	I1124 14:19:37.722827  198824 filesync.go:126] Scanning /home/jenkins/minikube-integration/21932-2805/.minikube/files for local assets ...
	I1124 14:19:37.722911  198824 filesync.go:149] local asset: /home/jenkins/minikube-integration/21932-2805/.minikube/files/etc/ssl/certs/46112.pem -> 46112.pem in /etc/ssl/certs
	I1124 14:19:37.723014  198824 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 14:19:37.730458  198824 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/files/etc/ssl/certs/46112.pem --> /etc/ssl/certs/46112.pem (1708 bytes)
	I1124 14:19:37.747764  198824 start.go:296] duration metric: took 158.423791ms for postStartSetup
	I1124 14:19:37.747843  198824 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 14:19:37.747899  198824 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-720293
	I1124 14:19:37.764668  198824 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/embed-certs-720293/id_rsa Username:docker}
	I1124 14:19:37.872442  198824 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 14:19:37.877270  198824 fix.go:56] duration metric: took 5.105204695s for fixHost
	I1124 14:19:37.877294  198824 start.go:83] releasing machines lock for "embed-certs-720293", held for 5.10525464s
	I1124 14:19:37.877363  198824 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-720293
	I1124 14:19:37.895319  198824 ssh_runner.go:195] Run: cat /version.json
	I1124 14:19:37.895463  198824 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-720293
	I1124 14:19:37.895736  198824 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 14:19:37.895819  198824 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-720293
	I1124 14:19:37.917164  198824 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/embed-certs-720293/id_rsa Username:docker}
	I1124 14:19:37.929188  198824 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/embed-certs-720293/id_rsa Username:docker}
	I1124 14:19:38.028050  198824 ssh_runner.go:195] Run: systemctl --version
	I1124 14:19:38.130840  198824 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1124 14:19:38.178795  198824 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 14:19:38.183595  198824 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 14:19:38.183677  198824 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 14:19:38.191529  198824 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1124 14:19:38.191555  198824 start.go:496] detecting cgroup driver to use...
	I1124 14:19:38.191586  198824 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1124 14:19:38.191646  198824 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1124 14:19:38.209318  198824 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1124 14:19:38.222714  198824 docker.go:218] disabling cri-docker service (if available) ...
	I1124 14:19:38.222784  198824 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 14:19:38.238683  198824 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 14:19:38.252044  198824 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 14:19:38.373600  198824 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 14:19:38.496108  198824 docker.go:234] disabling docker service ...
	I1124 14:19:38.496173  198824 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 14:19:38.511383  198824 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 14:19:38.525130  198824 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 14:19:38.660445  198824 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 14:19:38.774276  198824 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 14:19:38.786884  198824 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 14:19:38.801457  198824 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1124 14:19:38.801568  198824 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 14:19:38.811459  198824 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1124 14:19:38.811556  198824 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 14:19:38.821978  198824 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 14:19:38.832285  198824 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 14:19:38.841955  198824 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 14:19:38.850985  198824 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 14:19:38.861973  198824 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 14:19:38.871807  198824 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 14:19:38.880613  198824 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 14:19:38.888526  198824 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 14:19:38.896245  198824 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 14:19:39.016811  198824 ssh_runner.go:195] Run: sudo systemctl restart crio
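
The sed invocations above pin the pause image and switch CRI-O to the cgroupfs cgroup manager before crio is restarted. A rough Go equivalent of the first two rewrites, under the same file path and values as the log (the remaining conmon_cgroup/default_sysctls edits follow the same pattern and are omitted for brevity):

package main

import (
	"os"
	"regexp"
)

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(conf)
	if err != nil {
		panic(err)
	}
	s := string(data)
	// Pin the pause image, as the first sed does.
	s = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(s, `pause_image = "registry.k8s.io/pause:3.10.1"`)
	// Switch the cgroup manager to cgroupfs, as the second sed does.
	s = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(s, `cgroup_manager = "cgroupfs"`)
	if err := os.WriteFile(conf, []byte(s), 0o644); err != nil {
		panic(err)
	}
}
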
	I1124 14:19:39.236589  198824 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1124 14:19:39.236668  198824 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1124 14:19:39.240980  198824 start.go:564] Will wait 60s for crictl version
	I1124 14:19:39.241046  198824 ssh_runner.go:195] Run: which crictl
	I1124 14:19:39.245598  198824 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 14:19:39.289323  198824 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1124 14:19:39.289409  198824 ssh_runner.go:195] Run: crio --version
	I1124 14:19:39.334030  198824 ssh_runner.go:195] Run: crio --version
	I1124 14:19:39.390321  198824 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1124 14:19:39.393337  198824 cli_runner.go:164] Run: docker network inspect embed-certs-720293 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 14:19:39.430091  198824 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1124 14:19:39.434519  198824 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
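
The grep -v / echo / cp pipeline above is an idempotent way of pinning host.minikube.internal in /etc/hosts: drop any stale line for the name, append a fresh one. The same logic as a small Go helper, assuming the tab-separated entry format from the log (ensureHostsEntry is a made-up name for illustration):

package main

import (
	"os"
	"strings"
)

// ensureHostsEntry removes any line ending in "\t<host>" and appends a
// fresh "<ip>\t<host>" entry, like the shell pipeline above.
func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // stale entry; re-added below
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.85.1", "host.minikube.internal"); err != nil {
		panic(err)
	}
}
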
	I1124 14:19:39.449809  198824 kubeadm.go:884] updating cluster {Name:embed-certs-720293 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-720293 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 14:19:39.449948  198824 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 14:19:39.450004  198824 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 14:19:39.501878  198824 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 14:19:39.501899  198824 crio.go:433] Images already preloaded, skipping extraction
	I1124 14:19:39.501956  198824 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 14:19:39.542244  198824 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 14:19:39.542265  198824 cache_images.go:86] Images are preloaded, skipping loading
	I1124 14:19:39.542277  198824 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1124 14:19:39.542376  198824 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-720293 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-720293 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1124 14:19:39.542455  198824 ssh_runner.go:195] Run: crio config
	I1124 14:19:39.619095  198824 cni.go:84] Creating CNI manager for ""
	I1124 14:19:39.619126  198824 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 14:19:39.619170  198824 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 14:19:39.619221  198824 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-720293 NodeName:embed-certs-720293 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 14:19:39.619498  198824 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-720293"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1124 14:19:39.619601  198824 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1124 14:19:39.627783  198824 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 14:19:39.627864  198824 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 14:19:39.635712  198824 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1124 14:19:39.649324  198824 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 14:19:39.661654  198824 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1124 14:19:39.675333  198824 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1124 14:19:39.679084  198824 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 14:19:39.689419  198824 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 14:19:39.807314  198824 ssh_runner.go:195] Run: sudo systemctl start kubelet
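
The scp + systemctl sequence above installs the rendered kubelet unit files and (re)starts the service. A compact sketch of the same flow; the drop-in content here is abbreviated to a couple of flags rather than the full 368-byte ExecStart line shown earlier in this section:

package main

import (
	"os"
	"os/exec"
)

func main() {
	// Abbreviated drop-in; the real one carries the full ExecStart from the log.
	const dir = "/etc/systemd/system/kubelet.service.d"
	unit := "[Service]\nExecStart=\nExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --config=/var/lib/kubelet/config.yaml --kubeconfig=/etc/kubernetes/kubelet.conf\n"
	if err := os.MkdirAll(dir, 0o755); err != nil {
		panic(err)
	}
	if err := os.WriteFile(dir+"/10-kubeadm.conf", []byte(unit), 0o644); err != nil {
		panic(err)
	}
	// Reload systemd and start kubelet, as the final two commands above do.
	for _, args := range [][]string{{"daemon-reload"}, {"start", "kubelet"}} {
		if err := exec.Command("systemctl", args...).Run(); err != nil {
			panic(err)
		}
	}
}
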
	I1124 14:19:39.823834  198824 certs.go:69] Setting up /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/embed-certs-720293 for IP: 192.168.85.2
	I1124 14:19:39.823853  198824 certs.go:195] generating shared ca certs ...
	I1124 14:19:39.823870  198824 certs.go:227] acquiring lock for ca certs: {Name:mk5b88bcf3bee8e73291a2c9c79f99bafa2afa7b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:19:39.824011  198824 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21932-2805/.minikube/ca.key
	I1124 14:19:39.824054  198824 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21932-2805/.minikube/proxy-client-ca.key
	I1124 14:19:39.824061  198824 certs.go:257] generating profile certs ...
	I1124 14:19:39.824153  198824 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/embed-certs-720293/client.key
	I1124 14:19:39.824216  198824 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/embed-certs-720293/apiserver.key.8c3742eb
	I1124 14:19:39.824262  198824 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/embed-certs-720293/proxy-client.key
	I1124 14:19:39.824364  198824 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2805/.minikube/certs/4611.pem (1338 bytes)
	W1124 14:19:39.824397  198824 certs.go:480] ignoring /home/jenkins/minikube-integration/21932-2805/.minikube/certs/4611_empty.pem, impossibly tiny 0 bytes
	I1124 14:19:39.824405  198824 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca-key.pem (1679 bytes)
	I1124 14:19:39.824432  198824 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca.pem (1078 bytes)
	I1124 14:19:39.824455  198824 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2805/.minikube/certs/cert.pem (1123 bytes)
	I1124 14:19:39.824477  198824 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2805/.minikube/certs/key.pem (1675 bytes)
	I1124 14:19:39.824539  198824 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2805/.minikube/files/etc/ssl/certs/46112.pem (1708 bytes)
	I1124 14:19:39.825139  198824 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 14:19:39.846635  198824 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1124 14:19:39.869884  198824 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 14:19:39.892098  198824 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1124 14:19:39.914646  198824 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/embed-certs-720293/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1124 14:19:39.943953  198824 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/embed-certs-720293/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1124 14:19:39.964855  198824 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/embed-certs-720293/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 14:19:39.986206  198824 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/embed-certs-720293/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1124 14:19:40.021362  198824 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/certs/4611.pem --> /usr/share/ca-certificates/4611.pem (1338 bytes)
	I1124 14:19:40.046247  198824 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/files/etc/ssl/certs/46112.pem --> /usr/share/ca-certificates/46112.pem (1708 bytes)
	I1124 14:19:40.068007  198824 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 14:19:40.096461  198824 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 14:19:40.110357  198824 ssh_runner.go:195] Run: openssl version
	I1124 14:19:40.119251  198824 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4611.pem && ln -fs /usr/share/ca-certificates/4611.pem /etc/ssl/certs/4611.pem"
	I1124 14:19:40.129450  198824 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4611.pem
	I1124 14:19:40.134614  198824 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 13:21 /usr/share/ca-certificates/4611.pem
	I1124 14:19:40.134724  198824 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4611.pem
	I1124 14:19:40.177363  198824 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4611.pem /etc/ssl/certs/51391683.0"
	I1124 14:19:40.185991  198824 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/46112.pem && ln -fs /usr/share/ca-certificates/46112.pem /etc/ssl/certs/46112.pem"
	I1124 14:19:40.195733  198824 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/46112.pem
	I1124 14:19:40.199326  198824 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 13:21 /usr/share/ca-certificates/46112.pem
	I1124 14:19:40.199527  198824 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/46112.pem
	I1124 14:19:40.240356  198824 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/46112.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 14:19:40.248199  198824 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 14:19:40.256359  198824 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 14:19:40.259879  198824 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 13:14 /usr/share/ca-certificates/minikubeCA.pem
	I1124 14:19:40.259959  198824 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 14:19:40.300877  198824 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
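
Each extra CA above is installed twice: copied under /usr/share/ca-certificates, then symlinked into /etc/ssl/certs under its OpenSSL subject hash (51391683.0, 3ec20f2e.0, b5213941.0 in this run). A sketch of that hash-and-link step, shelling out to openssl for the hash exactly as the log does (linkByHash is an illustrative name):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// linkByHash asks openssl for the cert's subject hash (e.g. b5213941) and
// creates the /etc/ssl/certs/<hash>.0 symlink if it is not already there.
func linkByHash(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
	if _, err := os.Lstat(link); err == nil {
		return nil // link already present, matching the `test -L ||` guard
	}
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
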
	I1124 14:19:40.308667  198824 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 14:19:40.313459  198824 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1124 14:19:40.355318  198824 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1124 14:19:40.397024  198824 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1124 14:19:40.449225  198824 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1124 14:19:40.495179  198824 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1124 14:19:40.549766  198824 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
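
The six -checkend 86400 probes above ask openssl whether each control-plane cert is still valid 24 hours from now. The same check can be done natively with crypto/x509; a sketch, assuming PEM-encoded certs at the logged paths:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM cert at path expires within d,
// the same question `openssl x509 -checkend 86400` answers above.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		panic(err)
	}
	fmt.Println("expires within 24h:", soon)
}
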
	I1124 14:19:40.613439  198824 kubeadm.go:401] StartCluster: {Name:embed-certs-720293 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-720293 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 14:19:40.613585  198824 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 14:19:40.613666  198824 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 14:19:40.710462  198824 cri.go:89] found id: "7a20914603732648c5d9ff34200e808b2002ae00dc4000fe37adb370011a3888"
	I1124 14:19:40.710542  198824 cri.go:89] found id: "7634741324dd1d91cc93df52ab62f4e54882e2826f3185dee5ff5c38bdffd3cf"
	I1124 14:19:40.710602  198824 cri.go:89] found id: "43d901c75e4d3ea7cfdd826b2f38e870e2be39de21570400fd187f7a2239344b"
	I1124 14:19:40.710638  198824 cri.go:89] found id: "da3f0798a706df28b161fc15c24ff964503411fba4af93d09ab0786003dc32ea"
	I1124 14:19:40.710677  198824 cri.go:89] found id: ""
	I1124 14:19:40.710748  198824 ssh_runner.go:195] Run: sudo runc list -f json
	W1124 14:19:40.728643  198824 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T14:19:40Z" level=error msg="open /run/runc: no such file or directory"
	I1124 14:19:40.728775  198824 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 14:19:40.742860  198824 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1124 14:19:40.742927  198824 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1124 14:19:40.743005  198824 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1124 14:19:40.751795  198824 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1124 14:19:40.752384  198824 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-720293" does not appear in /home/jenkins/minikube-integration/21932-2805/kubeconfig
	I1124 14:19:40.752694  198824 kubeconfig.go:62] /home/jenkins/minikube-integration/21932-2805/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-720293" cluster setting kubeconfig missing "embed-certs-720293" context setting]
	I1124 14:19:40.753172  198824 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2805/kubeconfig: {Name:mk95d10d27091d631e85a5a3c35d5e4e38630871 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
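
The repair above adds the missing cluster and context entries and rewrites the kubeconfig under a file lock. A sketch of the same add-if-missing logic using client-go's clientcmd package; the server URL is taken from the log, but the certificate data a real repair would carry is omitted, so this is illustrative only:

package main

import (
	"k8s.io/client-go/tools/clientcmd"
	api "k8s.io/client-go/tools/clientcmd/api"
)

func main() {
	const path = "/home/jenkins/minikube-integration/21932-2805/kubeconfig"
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		panic(err)
	}
	const name = "embed-certs-720293"
	if _, ok := cfg.Clusters[name]; !ok {
		c := api.NewCluster()
		c.Server = "https://192.168.85.2:8443" // endpoint from the log
		// A real repair would also set CertificateAuthority(/Data).
		cfg.Clusters[name] = c
	}
	if _, ok := cfg.Contexts[name]; !ok {
		ctx := api.NewContext()
		ctx.Cluster, ctx.AuthInfo = name, name
		cfg.Contexts[name] = ctx
	}
	if err := clientcmd.WriteToFile(*cfg, path); err != nil {
		panic(err)
	}
}
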
	I1124 14:19:40.754735  198824 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1124 14:19:40.767370  198824 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1124 14:19:40.767437  198824 kubeadm.go:602] duration metric: took 24.491254ms to restartPrimaryControlPlane
	I1124 14:19:40.767461  198824 kubeadm.go:403] duration metric: took 154.032236ms to StartCluster
	I1124 14:19:40.767489  198824 settings.go:142] acquiring lock: {Name:mk89c1ba43c874315f683e1eb3a8f5ff3817a931 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:19:40.767566  198824 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21932-2805/kubeconfig
	I1124 14:19:40.768802  198824 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2805/kubeconfig: {Name:mk95d10d27091d631e85a5a3c35d5e4e38630871 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:19:40.769041  198824 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 14:19:40.769377  198824 config.go:182] Loaded profile config "embed-certs-720293": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 14:19:40.769534  198824 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 14:19:40.769609  198824 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-720293"
	I1124 14:19:40.769628  198824 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-720293"
	W1124 14:19:40.769634  198824 addons.go:248] addon storage-provisioner should already be in state true
	I1124 14:19:40.769656  198824 host.go:66] Checking if "embed-certs-720293" exists ...
	I1124 14:19:40.769723  198824 addons.go:70] Setting dashboard=true in profile "embed-certs-720293"
	I1124 14:19:40.769738  198824 addons.go:239] Setting addon dashboard=true in "embed-certs-720293"
	W1124 14:19:40.769745  198824 addons.go:248] addon dashboard should already be in state true
	I1124 14:19:40.769779  198824 host.go:66] Checking if "embed-certs-720293" exists ...
	I1124 14:19:40.770185  198824 cli_runner.go:164] Run: docker container inspect embed-certs-720293 --format={{.State.Status}}
	I1124 14:19:40.770293  198824 cli_runner.go:164] Run: docker container inspect embed-certs-720293 --format={{.State.Status}}
	I1124 14:19:40.770701  198824 addons.go:70] Setting default-storageclass=true in profile "embed-certs-720293"
	I1124 14:19:40.770746  198824 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-720293"
	I1124 14:19:40.771015  198824 cli_runner.go:164] Run: docker container inspect embed-certs-720293 --format={{.State.Status}}
	I1124 14:19:40.773161  198824 out.go:179] * Verifying Kubernetes components...
	I1124 14:19:40.779345  198824 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 14:19:40.821065  198824 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1124 14:19:40.825657  198824 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1124 14:19:40.828583  198824 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1124 14:19:40.828611  198824 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1124 14:19:40.828680  198824 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 14:19:40.828682  198824 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-720293
	I1124 14:19:40.834221  198824 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 14:19:40.834243  198824 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 14:19:40.834318  198824 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-720293
	I1124 14:19:40.839295  198824 addons.go:239] Setting addon default-storageclass=true in "embed-certs-720293"
	W1124 14:19:40.839316  198824 addons.go:248] addon default-storageclass should already be in state true
	I1124 14:19:40.839341  198824 host.go:66] Checking if "embed-certs-720293" exists ...
	I1124 14:19:40.839898  198824 cli_runner.go:164] Run: docker container inspect embed-certs-720293 --format={{.State.Status}}
	I1124 14:19:40.892711  198824 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/embed-certs-720293/id_rsa Username:docker}
	I1124 14:19:40.895419  198824 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/embed-certs-720293/id_rsa Username:docker}
	I1124 14:19:40.899567  198824 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 14:19:40.899585  198824 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 14:19:40.899646  198824 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-720293
	I1124 14:19:40.928769  198824 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/embed-certs-720293/id_rsa Username:docker}
	I1124 14:19:41.099533  198824 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1124 14:19:41.099568  198824 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1124 14:19:41.153214  198824 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1124 14:19:41.153283  198824 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1124 14:19:41.168033  198824 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 14:19:41.178627  198824 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 14:19:41.208950  198824 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 14:19:41.220419  198824 node_ready.go:35] waiting up to 6m0s for node "embed-certs-720293" to be "Ready" ...
	I1124 14:19:41.252527  198824 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1124 14:19:41.252595  198824 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1124 14:19:41.328802  198824 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1124 14:19:41.328871  198824 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1124 14:19:41.393726  198824 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1124 14:19:41.393791  198824 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1124 14:19:41.455744  198824 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1124 14:19:41.455808  198824 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1124 14:19:41.535649  198824 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1124 14:19:41.535678  198824 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1124 14:19:41.556977  198824 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1124 14:19:41.557002  198824 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1124 14:19:41.579295  198824 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1124 14:19:41.579323  198824 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1124 14:19:41.601426  198824 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1124 14:19:45.514808  198824 node_ready.go:49] node "embed-certs-720293" is "Ready"
	I1124 14:19:45.514842  198824 node_ready.go:38] duration metric: took 4.29429741s for node "embed-certs-720293" to be "Ready" ...
	I1124 14:19:45.514857  198824 api_server.go:52] waiting for apiserver process to appear ...
	I1124 14:19:45.514913  198824 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 14:19:45.699514  198824 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.520804756s)
	I1124 14:19:47.104229  198824 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.895198427s)
	I1124 14:19:47.104342  198824 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (5.50288546s)
	I1124 14:19:47.104470  198824 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.589541456s)
	I1124 14:19:47.104484  198824 api_server.go:72] duration metric: took 6.33539077s to wait for apiserver process to appear ...
	I1124 14:19:47.104490  198824 api_server.go:88] waiting for apiserver healthz status ...
	I1124 14:19:47.104506  198824 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1124 14:19:47.107273  198824 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-720293 addons enable metrics-server
	
	I1124 14:19:47.110329  198824 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, dashboard
	I1124 14:19:47.113221  198824 addons.go:530] duration metric: took 6.343687145s for enable addons: enabled=[default-storageclass storage-provisioner dashboard]
	I1124 14:19:47.115764  198824 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1124 14:19:47.115787  198824 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1124 14:19:47.605596  198824 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1124 14:19:47.618056  198824 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1124 14:19:47.619385  198824 api_server.go:141] control plane version: v1.34.1
	I1124 14:19:47.619411  198824 api_server.go:131] duration metric: took 514.915785ms to wait for apiserver health ...
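
The wait above tolerates an initial 500 from /healthz (the rbac/bootstrap-roles post-start hook had not finished) and re-polls until the 200 at 14:19:47.618. A self-contained sketch of that poll loop; TLS verification is skipped here purely to keep the example short, where a real client would trust the cluster CA instead:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitHealthz polls the apiserver until /healthz returns 200 or the
// timeout elapses, retrying on the half-second cadence seen in the log.
func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// Insecure TLS for brevity only; do not do this outside a sketch.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("healthz not OK within %s", timeout)
}

func main() {
	if err := waitHealthz("https://192.168.85.2:8443/healthz", time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("apiserver healthy")
}
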
	I1124 14:19:47.619422  198824 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 14:19:47.624624  198824 system_pods.go:59] 8 kube-system pods found
	I1124 14:19:47.624668  198824 system_pods.go:61] "coredns-66bc5c9577-6nztq" [9fbc8e0e-67a3-4086-aa1d-29b18f0c8d19] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 14:19:47.624678  198824 system_pods.go:61] "etcd-embed-certs-720293" [bc16ed26-fa6f-4c97-836c-c9f0b7f731aa] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 14:19:47.624683  198824 system_pods.go:61] "kindnet-ft88w" [7966e19b-c109-4372-8b9d-53d6f04dd7e7] Running
	I1124 14:19:47.624690  198824 system_pods.go:61] "kube-apiserver-embed-certs-720293" [8cdcb85e-986c-4ce2-b890-a8d96ea344c3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1124 14:19:47.624696  198824 system_pods.go:61] "kube-controller-manager-embed-certs-720293" [9e5790d2-8178-4215-9b38-ffedd4359966] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 14:19:47.624701  198824 system_pods.go:61] "kube-proxy-pwpl4" [9404897b-5bae-4f03-987a-01e4ec7795a9] Running
	I1124 14:19:47.624707  198824 system_pods.go:61] "kube-scheduler-embed-certs-720293" [a289a525-28c8-45a8-a4e4-dde78e1ef777] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 14:19:47.624710  198824 system_pods.go:61] "storage-provisioner" [f6c6574d-20bd-49e9-86e6-b0d81b3490c6] Running
	I1124 14:19:47.624716  198824 system_pods.go:74] duration metric: took 5.289451ms to wait for pod list to return data ...
	I1124 14:19:47.624733  198824 default_sa.go:34] waiting for default service account to be created ...
	I1124 14:19:47.627388  198824 default_sa.go:45] found service account: "default"
	I1124 14:19:47.627414  198824 default_sa.go:55] duration metric: took 2.674572ms for default service account to be created ...
	I1124 14:19:47.627423  198824 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 14:19:47.631043  198824 system_pods.go:86] 8 kube-system pods found
	I1124 14:19:47.631077  198824 system_pods.go:89] "coredns-66bc5c9577-6nztq" [9fbc8e0e-67a3-4086-aa1d-29b18f0c8d19] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 14:19:47.631087  198824 system_pods.go:89] "etcd-embed-certs-720293" [bc16ed26-fa6f-4c97-836c-c9f0b7f731aa] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 14:19:47.631092  198824 system_pods.go:89] "kindnet-ft88w" [7966e19b-c109-4372-8b9d-53d6f04dd7e7] Running
	I1124 14:19:47.631099  198824 system_pods.go:89] "kube-apiserver-embed-certs-720293" [8cdcb85e-986c-4ce2-b890-a8d96ea344c3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1124 14:19:47.631106  198824 system_pods.go:89] "kube-controller-manager-embed-certs-720293" [9e5790d2-8178-4215-9b38-ffedd4359966] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 14:19:47.631110  198824 system_pods.go:89] "kube-proxy-pwpl4" [9404897b-5bae-4f03-987a-01e4ec7795a9] Running
	I1124 14:19:47.631117  198824 system_pods.go:89] "kube-scheduler-embed-certs-720293" [a289a525-28c8-45a8-a4e4-dde78e1ef777] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 14:19:47.631121  198824 system_pods.go:89] "storage-provisioner" [f6c6574d-20bd-49e9-86e6-b0d81b3490c6] Running
	I1124 14:19:47.631130  198824 system_pods.go:126] duration metric: took 3.70047ms to wait for k8s-apps to be running ...
	I1124 14:19:47.631145  198824 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 14:19:47.631203  198824 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 14:19:47.645996  198824 system_svc.go:56] duration metric: took 14.842602ms WaitForService to wait for kubelet
	I1124 14:19:47.646036  198824 kubeadm.go:587] duration metric: took 6.876931008s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 14:19:47.646057  198824 node_conditions.go:102] verifying NodePressure condition ...
	I1124 14:19:47.651443  198824 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1124 14:19:47.651475  198824 node_conditions.go:123] node cpu capacity is 2
	I1124 14:19:47.651488  198824 node_conditions.go:105] duration metric: took 5.425567ms to run NodePressure ...
	I1124 14:19:47.651501  198824 start.go:242] waiting for startup goroutines ...
	I1124 14:19:47.651509  198824 start.go:247] waiting for cluster config update ...
	I1124 14:19:47.651520  198824 start.go:256] writing updated cluster config ...
	I1124 14:19:47.651817  198824 ssh_runner.go:195] Run: rm -f paused
	I1124 14:19:47.656578  198824 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 14:19:47.660860  198824 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-6nztq" in "kube-system" namespace to be "Ready" or be gone ...
	W1124 14:19:49.666715  198824 pod_ready.go:104] pod "coredns-66bc5c9577-6nztq" is not "Ready", error: <nil>
	W1124 14:19:51.672376  198824 pod_ready.go:104] pod "coredns-66bc5c9577-6nztq" is not "Ready", error: <nil>
	W1124 14:19:54.169052  198824 pod_ready.go:104] pod "coredns-66bc5c9577-6nztq" is not "Ready", error: <nil>
	W1124 14:19:56.169744  198824 pod_ready.go:104] pod "coredns-66bc5c9577-6nztq" is not "Ready", error: <nil>
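
The pod_ready loop above keeps re-checking coredns-66bc5c9577-6nztq until its Ready condition turns True or the pod disappears (the log ends mid-wait). A sketch of that check with client-go, assuming the kubeconfig path from this run; in the real wait a NotFound error would count as "gone" rather than a failure:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21932-2805/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(4 * time.Minute) // matches the 4m0s budget above
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-66bc5c9577-6nztq", metav1.GetOptions{})
		if err != nil {
			panic(err) // a NotFound here counts as "gone" in the real wait
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				fmt.Println("pod is Ready")
				return
			}
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for Ready")
}
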
	
	
	==> CRI-O <==
	Nov 24 14:19:46 no-preload-444317 crio[659]: time="2025-11-24T14:19:46.022562792Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 24 14:19:46 no-preload-444317 crio[659]: time="2025-11-24T14:19:46.027247117Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 24 14:19:46 no-preload-444317 crio[659]: time="2025-11-24T14:19:46.027432136Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 24 14:19:46 no-preload-444317 crio[659]: time="2025-11-24T14:19:46.027515641Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 24 14:19:46 no-preload-444317 crio[659]: time="2025-11-24T14:19:46.032786072Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 24 14:19:46 no-preload-444317 crio[659]: time="2025-11-24T14:19:46.032949725Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 24 14:19:46 no-preload-444317 crio[659]: time="2025-11-24T14:19:46.03302591Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 24 14:19:46 no-preload-444317 crio[659]: time="2025-11-24T14:19:46.037912567Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 24 14:19:46 no-preload-444317 crio[659]: time="2025-11-24T14:19:46.037948022Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 24 14:19:46 no-preload-444317 crio[659]: time="2025-11-24T14:19:46.037974508Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 24 14:19:46 no-preload-444317 crio[659]: time="2025-11-24T14:19:46.045855576Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 24 14:19:46 no-preload-444317 crio[659]: time="2025-11-24T14:19:46.046030306Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 24 14:19:52 no-preload-444317 crio[659]: time="2025-11-24T14:19:52.115015623Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=6a29eab8-f7da-41e6-87cf-18ba5ac4b804 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 14:19:52 no-preload-444317 crio[659]: time="2025-11-24T14:19:52.115880502Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=6ca4d172-f4e0-46ed-81a8-4661cd7a8c74 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 14:19:52 no-preload-444317 crio[659]: time="2025-11-24T14:19:52.117778852Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8pxxl/dashboard-metrics-scraper" id=2baac85b-afbb-4362-a572-4029406575fa name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 14:19:52 no-preload-444317 crio[659]: time="2025-11-24T14:19:52.11803428Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 14:19:52 no-preload-444317 crio[659]: time="2025-11-24T14:19:52.187952327Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 14:19:52 no-preload-444317 crio[659]: time="2025-11-24T14:19:52.189651955Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 14:19:52 no-preload-444317 crio[659]: time="2025-11-24T14:19:52.258999387Z" level=info msg="Created container 1e341c0d44a6f583b56404ba3cbb8e6d190a6bead92ac62577f51ce6b821e1ba: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8pxxl/dashboard-metrics-scraper" id=2baac85b-afbb-4362-a572-4029406575fa name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 14:19:52 no-preload-444317 crio[659]: time="2025-11-24T14:19:52.264346939Z" level=info msg="Starting container: 1e341c0d44a6f583b56404ba3cbb8e6d190a6bead92ac62577f51ce6b821e1ba" id=d2fe1369-5cb6-4caa-9a6e-a9d18a47b1d8 name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 14:19:52 no-preload-444317 conmon[1727]: conmon 1e341c0d44a6f583b564 <ninfo>: container 1729 exited with status 1
	Nov 24 14:19:52 no-preload-444317 crio[659]: time="2025-11-24T14:19:52.277072185Z" level=info msg="Started container" PID=1729 containerID=1e341c0d44a6f583b56404ba3cbb8e6d190a6bead92ac62577f51ce6b821e1ba description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8pxxl/dashboard-metrics-scraper id=d2fe1369-5cb6-4caa-9a6e-a9d18a47b1d8 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f13c1700211ca9495f08248319168f073ce0892cf683f5035d6d89944e19be74
	Nov 24 14:19:52 no-preload-444317 crio[659]: time="2025-11-24T14:19:52.424424387Z" level=info msg="Removing container: 867c1c2bdba269643d79e651f22075350405e522cc23eb7a82d41b67b0d922a1" id=7dbb0103-93a3-4f31-8958-1fd933befae9 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 24 14:19:52 no-preload-444317 crio[659]: time="2025-11-24T14:19:52.439419458Z" level=info msg="Error loading conmon cgroup of container 867c1c2bdba269643d79e651f22075350405e522cc23eb7a82d41b67b0d922a1: cgroup deleted" id=7dbb0103-93a3-4f31-8958-1fd933befae9 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 24 14:19:52 no-preload-444317 crio[659]: time="2025-11-24T14:19:52.446722354Z" level=info msg="Removed container 867c1c2bdba269643d79e651f22075350405e522cc23eb7a82d41b67b0d922a1: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8pxxl/dashboard-metrics-scraper" id=7dbb0103-93a3-4f31-8958-1fd933befae9 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	1e341c0d44a6f       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           6 seconds ago       Exited              dashboard-metrics-scraper   3                   f13c1700211ca       dashboard-metrics-scraper-6ffb444bf9-8pxxl   kubernetes-dashboard
	7500421b8d518       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           22 seconds ago      Running             storage-provisioner         2                   e618c6f2ab6b3       storage-provisioner                          kube-system
	4037ee2765bda       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   40 seconds ago      Running             kubernetes-dashboard        0                   aab56f53d3211       kubernetes-dashboard-855c9754f9-xdmjc        kubernetes-dashboard
	91d70065b14a0       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           52 seconds ago      Running             coredns                     1                   f1e2c0d4fe590       coredns-66bc5c9577-lrh58                     kube-system
	8eea84e17d64e       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           52 seconds ago      Running             busybox                     1                   0dcbb17a347a6       busybox                                      default
	b98558ac51e4c       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           52 seconds ago      Running             kube-proxy                  1                   ccfb13bd42e1a       kube-proxy-m4fb4                             kube-system
	9edacfe959f78       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           52 seconds ago      Exited              storage-provisioner         1                   e618c6f2ab6b3       storage-provisioner                          kube-system
	1c3203cd2d06f       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           52 seconds ago      Running             kindnet-cni                 1                   b4346711e9408       kindnet-zwxh6                                kube-system
	6fb3ae76e7269       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           59 seconds ago      Running             kube-scheduler              1                   85c8d5d569f2d       kube-scheduler-no-preload-444317             kube-system
	033ef6fd0ada3       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           59 seconds ago      Running             etcd                        1                   ea7a96cb69826       etcd-no-preload-444317                       kube-system
	838b8a2e9df2c       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           59 seconds ago      Running             kube-apiserver              1                   ad22c589251fb       kube-apiserver-no-preload-444317             kube-system
	9fae20cc90ea0       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           59 seconds ago      Running             kube-controller-manager     1                   012905bf7e225       kube-controller-manager-no-preload-444317    kube-system
	
	
	==> coredns [91d70065b14a06021db8c9a017b68c7833b9f540e25841cd0422a6eac3a15b51] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:33049 - 6681 "HINFO IN 6435730273369765947.1452196705638174724. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.024408072s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               no-preload-444317
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-444317
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b5d1c9f4e75f4e638a533695fd62619949cefcab
	                    minikube.k8s.io/name=no-preload-444317
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T14_18_02_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 14:17:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-444317
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 14:19:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 14:19:35 +0000   Mon, 24 Nov 2025 14:17:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 14:19:35 +0000   Mon, 24 Nov 2025 14:17:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 14:19:35 +0000   Mon, 24 Nov 2025 14:17:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Nov 2025 14:19:35 +0000   Mon, 24 Nov 2025 14:18:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    no-preload-444317
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 7283ea1857f18f20a875c29069214c9d
	  System UUID:                7f3cb54f-ba1b-4064-b92e-1b7768ad96c4
	  Boot ID:                    1b5f797b-5607-4a65-8de2-379783b7e272
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 coredns-66bc5c9577-lrh58                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     113s
	  kube-system                 etcd-no-preload-444317                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m
	  kube-system                 kindnet-zwxh6                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      113s
	  kube-system                 kube-apiserver-no-preload-444317              250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-controller-manager-no-preload-444317     200m (10%)    0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-proxy-m4fb4                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-scheduler-no-preload-444317              100m (5%)     0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-8pxxl    0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-xdmjc         0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 111s                   kube-proxy       
	  Normal   Starting                 52s                    kube-proxy       
	  Warning  CgroupV1                 2m10s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m10s (x8 over 2m10s)  kubelet          Node no-preload-444317 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m10s (x8 over 2m10s)  kubelet          Node no-preload-444317 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m10s (x8 over 2m10s)  kubelet          Node no-preload-444317 status is now: NodeHasSufficientPID
	  Normal   Starting                 118s                   kubelet          Starting kubelet.
	  Warning  CgroupV1                 118s                   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  117s                   kubelet          Node no-preload-444317 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    117s                   kubelet          Node no-preload-444317 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     117s                   kubelet          Node no-preload-444317 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           114s                   node-controller  Node no-preload-444317 event: Registered Node no-preload-444317 in Controller
	  Normal   NodeReady                97s                    kubelet          Node no-preload-444317 status is now: NodeReady
	  Normal   Starting                 60s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 60s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  60s (x8 over 60s)      kubelet          Node no-preload-444317 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    60s (x8 over 60s)      kubelet          Node no-preload-444317 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     60s (x8 over 60s)      kubelet          Node no-preload-444317 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           51s                    node-controller  Node no-preload-444317 event: Registered Node no-preload-444317 in Controller
	
	
	==> dmesg <==
	[Nov24 13:54] overlayfs: idmapped layers are currently not supported
	[Nov24 13:56] overlayfs: idmapped layers are currently not supported
	[Nov24 13:57] overlayfs: idmapped layers are currently not supported
	[Nov24 13:58] overlayfs: idmapped layers are currently not supported
	[  +2.963383] overlayfs: idmapped layers are currently not supported
	[ +47.364934] overlayfs: idmapped layers are currently not supported
	[Nov24 13:59] overlayfs: idmapped layers are currently not supported
	[Nov24 14:00] overlayfs: idmapped layers are currently not supported
	[ +26.972375] overlayfs: idmapped layers are currently not supported
	[Nov24 14:02] overlayfs: idmapped layers are currently not supported
	[Nov24 14:03] overlayfs: idmapped layers are currently not supported
	[Nov24 14:05] overlayfs: idmapped layers are currently not supported
	[Nov24 14:07] overlayfs: idmapped layers are currently not supported
	[ +22.741489] overlayfs: idmapped layers are currently not supported
	[Nov24 14:11] overlayfs: idmapped layers are currently not supported
	[Nov24 14:13] overlayfs: idmapped layers are currently not supported
	[ +29.661409] overlayfs: idmapped layers are currently not supported
	[ +14.398898] overlayfs: idmapped layers are currently not supported
	[Nov24 14:14] overlayfs: idmapped layers are currently not supported
	[ +36.148198] overlayfs: idmapped layers are currently not supported
	[Nov24 14:16] overlayfs: idmapped layers are currently not supported
	[Nov24 14:17] overlayfs: idmapped layers are currently not supported
	[Nov24 14:18] overlayfs: idmapped layers are currently not supported
	[ +49.916713] overlayfs: idmapped layers are currently not supported
	[Nov24 14:19] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [033ef6fd0ada365a2ecc235eed62496fe7b0a609cd2b260dacf36429246eb827] <==
	{"level":"warn","ts":"2025-11-24T14:19:02.915818Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57418","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:19:02.944459Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57428","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:19:02.954050Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57450","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:19:02.968908Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57474","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:19:02.985985Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57492","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:19:03.005020Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57516","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:19:03.020072Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57544","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:19:03.035104Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57556","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:19:03.059193Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57596","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:19:03.085552Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57604","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:19:03.090690Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:19:03.120684Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57642","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:19:03.132581Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57654","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:19:03.147513Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57674","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:19:03.163412Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57698","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:19:03.179063Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57706","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:19:03.198124Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57718","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:19:03.215905Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57734","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:19:03.240160Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57750","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:19:03.259154Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57770","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:19:03.294703Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57798","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:19:03.314375Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57816","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:19:03.336102Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57832","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:19:03.348140Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57854","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:19:03.406933Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57866","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 14:19:58 up  2:02,  0 user,  load average: 2.32, 2.82, 2.47
	Linux no-preload-444317 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [1c3203cd2d06f35ce87a959266c2a7517112b74ce7421df704dc7f717e2c1e12] <==
	I1124 14:19:05.818985       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1124 14:19:05.819209       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1124 14:19:05.819326       1 main.go:148] setting mtu 1500 for CNI 
	I1124 14:19:05.819338       1 main.go:178] kindnetd IP family: "ipv4"
	I1124 14:19:05.819348       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-24T14:19:06Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1124 14:19:06.017255       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1124 14:19:06.017347       1 controller.go:381] "Waiting for informer caches to sync"
	I1124 14:19:06.017381       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1124 14:19:06.020412       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1124 14:19:36.018161       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1124 14:19:36.020617       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1124 14:19:36.020624       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1124 14:19:36.020718       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1124 14:19:37.219883       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1124 14:19:37.219916       1 metrics.go:72] Registering metrics
	I1124 14:19:37.219976       1 controller.go:711] "Syncing nftables rules"
	I1124 14:19:46.022237       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1124 14:19:46.022282       1 main.go:301] handling current node
	I1124 14:19:56.025688       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1124 14:19:56.025722       1 main.go:301] handling current node
	
	
	==> kube-apiserver [838b8a2e9df2c5d45fa6bef18fa814af0df8f6efe64027561859a41453484af0] <==
	I1124 14:19:04.594753       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1124 14:19:04.597124       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1124 14:19:04.597224       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1124 14:19:04.597267       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1124 14:19:04.599441       1 aggregator.go:171] initial CRD sync complete...
	I1124 14:19:04.599451       1 autoregister_controller.go:144] Starting autoregister controller
	I1124 14:19:04.599457       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1124 14:19:04.599463       1 cache.go:39] Caches are synced for autoregister controller
	I1124 14:19:04.600963       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1124 14:19:04.601190       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1124 14:19:04.601240       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1124 14:19:04.608507       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1124 14:19:04.612660       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	E1124 14:19:04.613592       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1124 14:19:05.008686       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1124 14:19:05.101376       1 controller.go:667] quota admission added evaluator for: namespaces
	I1124 14:19:05.181395       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1124 14:19:05.189560       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1124 14:19:05.352764       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1124 14:19:05.420052       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1124 14:19:05.790087       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.106.252.211"}
	I1124 14:19:05.865762       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.108.148.158"}
	I1124 14:19:07.818065       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1124 14:19:08.165475       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1124 14:19:08.275313       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [9fae20cc90ea0e80a0e993b46f094ba9011120aee92f560781378c0ce54c97cb] <==
	I1124 14:19:07.838608       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-444317"
	I1124 14:19:07.838666       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1124 14:19:07.838676       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1124 14:19:07.838764       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1124 14:19:07.840831       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1124 14:19:07.844998       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1124 14:19:07.845592       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 14:19:07.847421       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1124 14:19:07.856098       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1124 14:19:07.857016       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1124 14:19:07.857577       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1124 14:19:07.857779       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1124 14:19:07.858136       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1124 14:19:07.858329       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1124 14:19:07.858398       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1124 14:19:07.858459       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1124 14:19:07.858516       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1124 14:19:07.858573       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1124 14:19:07.858800       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1124 14:19:07.864031       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 14:19:07.864133       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1124 14:19:07.874788       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1124 14:19:07.884895       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 14:19:07.884982       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1124 14:19:07.885014       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [b98558ac51e4ca540f3cbbc6b2f05fe5584b13c8eb1c8764289a95ecdde989f6] <==
	I1124 14:19:06.005800       1 server_linux.go:53] "Using iptables proxy"
	I1124 14:19:06.128127       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1124 14:19:06.238574       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1124 14:19:06.238751       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1124 14:19:06.238915       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 14:19:06.309777       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 14:19:06.309837       1 server_linux.go:132] "Using iptables Proxier"
	I1124 14:19:06.320782       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 14:19:06.321149       1 server.go:527] "Version info" version="v1.34.1"
	I1124 14:19:06.321171       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 14:19:06.331875       1 config.go:106] "Starting endpoint slice config controller"
	I1124 14:19:06.331896       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 14:19:06.332182       1 config.go:200] "Starting service config controller"
	I1124 14:19:06.332197       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 14:19:06.332480       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 14:19:06.332495       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1124 14:19:06.332908       1 config.go:309] "Starting node config controller"
	I1124 14:19:06.332923       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 14:19:06.332930       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1124 14:19:06.432577       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1124 14:19:06.432670       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1124 14:19:06.432701       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [6fb3ae76e7269290f5063bc5ecd82c590e464d07c67d0618070feb631692598d] <==
	W1124 14:19:04.049306       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1124 14:19:04.455231       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1124 14:19:04.455269       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 14:19:04.469716       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1124 14:19:04.470725       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 14:19:04.470759       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 14:19:04.470784       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1124 14:19:04.547679       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1124 14:19:04.547918       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1124 14:19:04.547965       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1124 14:19:04.548255       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1124 14:19:04.548326       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1124 14:19:04.548399       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1124 14:19:04.548026       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1124 14:19:04.548542       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1124 14:19:04.548614       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1124 14:19:04.548664       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1124 14:19:04.563320       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1124 14:19:04.563555       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1124 14:19:04.563681       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1124 14:19:04.564205       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1124 14:19:04.564280       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1124 14:19:04.564608       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1124 14:19:04.564684       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	I1124 14:19:04.571293       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 24 14:19:08 no-preload-444317 kubelet[780]: I1124 14:19:08.420422     780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/0ca9e6a2-e143-4b08-bfaf-a541eb0f842b-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-xdmjc\" (UID: \"0ca9e6a2-e143-4b08-bfaf-a541eb0f842b\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-xdmjc"
	Nov 24 14:19:08 no-preload-444317 kubelet[780]: I1124 14:19:08.420641     780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddrnh\" (UniqueName: \"kubernetes.io/projected/0ca9e6a2-e143-4b08-bfaf-a541eb0f842b-kube-api-access-ddrnh\") pod \"kubernetes-dashboard-855c9754f9-xdmjc\" (UID: \"0ca9e6a2-e143-4b08-bfaf-a541eb0f842b\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-xdmjc"
	Nov 24 14:19:08 no-preload-444317 kubelet[780]: W1124 14:19:08.605402     780 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/ade20648158abf5218a944caa623bf3f6036e6cac8f095be63310184940923ce/crio-aab56f53d3211a2d7015f2be240a988cd7cbcbe144d520e77aaab3f135c01e3e WatchSource:0}: Error finding container aab56f53d3211a2d7015f2be240a988cd7cbcbe144d520e77aaab3f135c01e3e: Status 404 returned error can't find the container with id aab56f53d3211a2d7015f2be240a988cd7cbcbe144d520e77aaab3f135c01e3e
	Nov 24 14:19:08 no-preload-444317 kubelet[780]: W1124 14:19:08.607180     780 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/ade20648158abf5218a944caa623bf3f6036e6cac8f095be63310184940923ce/crio-f13c1700211ca9495f08248319168f073ce0892cf683f5035d6d89944e19be74 WatchSource:0}: Error finding container f13c1700211ca9495f08248319168f073ce0892cf683f5035d6d89944e19be74: Status 404 returned error can't find the container with id f13c1700211ca9495f08248319168f073ce0892cf683f5035d6d89944e19be74
	Nov 24 14:19:13 no-preload-444317 kubelet[780]: I1124 14:19:13.290866     780 scope.go:117] "RemoveContainer" containerID="fc15f4ee1ed99d843cb406e8a62c8ea2e5ba6ff8d1ff1051d38cd97570ed9ad2"
	Nov 24 14:19:14 no-preload-444317 kubelet[780]: I1124 14:19:14.294718     780 scope.go:117] "RemoveContainer" containerID="fc15f4ee1ed99d843cb406e8a62c8ea2e5ba6ff8d1ff1051d38cd97570ed9ad2"
	Nov 24 14:19:14 no-preload-444317 kubelet[780]: I1124 14:19:14.295592     780 scope.go:117] "RemoveContainer" containerID="39f55880affebdac3a628db015dc76fbc8a3e718929b1fa8903ab4a79db2bca4"
	Nov 24 14:19:14 no-preload-444317 kubelet[780]: E1124 14:19:14.299592     780 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-8pxxl_kubernetes-dashboard(a2981b6c-eb71-4d4c-b08d-ba24f8243546)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8pxxl" podUID="a2981b6c-eb71-4d4c-b08d-ba24f8243546"
	Nov 24 14:19:15 no-preload-444317 kubelet[780]: I1124 14:19:15.298715     780 scope.go:117] "RemoveContainer" containerID="39f55880affebdac3a628db015dc76fbc8a3e718929b1fa8903ab4a79db2bca4"
	Nov 24 14:19:15 no-preload-444317 kubelet[780]: E1124 14:19:15.298861     780 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-8pxxl_kubernetes-dashboard(a2981b6c-eb71-4d4c-b08d-ba24f8243546)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8pxxl" podUID="a2981b6c-eb71-4d4c-b08d-ba24f8243546"
	Nov 24 14:19:18 no-preload-444317 kubelet[780]: I1124 14:19:18.560156     780 scope.go:117] "RemoveContainer" containerID="39f55880affebdac3a628db015dc76fbc8a3e718929b1fa8903ab4a79db2bca4"
	Nov 24 14:19:18 no-preload-444317 kubelet[780]: E1124 14:19:18.560897     780 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-8pxxl_kubernetes-dashboard(a2981b6c-eb71-4d4c-b08d-ba24f8243546)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8pxxl" podUID="a2981b6c-eb71-4d4c-b08d-ba24f8243546"
	Nov 24 14:19:29 no-preload-444317 kubelet[780]: I1124 14:19:29.114668     780 scope.go:117] "RemoveContainer" containerID="39f55880affebdac3a628db015dc76fbc8a3e718929b1fa8903ab4a79db2bca4"
	Nov 24 14:19:29 no-preload-444317 kubelet[780]: I1124 14:19:29.350139     780 scope.go:117] "RemoveContainer" containerID="39f55880affebdac3a628db015dc76fbc8a3e718929b1fa8903ab4a79db2bca4"
	Nov 24 14:19:29 no-preload-444317 kubelet[780]: I1124 14:19:29.350499     780 scope.go:117] "RemoveContainer" containerID="867c1c2bdba269643d79e651f22075350405e522cc23eb7a82d41b67b0d922a1"
	Nov 24 14:19:29 no-preload-444317 kubelet[780]: E1124 14:19:29.350716     780 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-8pxxl_kubernetes-dashboard(a2981b6c-eb71-4d4c-b08d-ba24f8243546)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8pxxl" podUID="a2981b6c-eb71-4d4c-b08d-ba24f8243546"
	Nov 24 14:19:29 no-preload-444317 kubelet[780]: I1124 14:19:29.370499     780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-xdmjc" podStartSLOduration=11.884572263 podStartE2EDuration="21.370482081s" podCreationTimestamp="2025-11-24 14:19:08 +0000 UTC" firstStartedPulling="2025-11-24 14:19:08.61627678 +0000 UTC m=+10.672214874" lastFinishedPulling="2025-11-24 14:19:18.102186599 +0000 UTC m=+20.158124692" observedRunningTime="2025-11-24 14:19:18.362714759 +0000 UTC m=+20.418652870" watchObservedRunningTime="2025-11-24 14:19:29.370482081 +0000 UTC m=+31.426420183"
	Nov 24 14:19:36 no-preload-444317 kubelet[780]: I1124 14:19:36.371042     780 scope.go:117] "RemoveContainer" containerID="9edacfe959f782420519cd918af883fab72dba651cea4c1003317aa7dbb5aee2"
	Nov 24 14:19:38 no-preload-444317 kubelet[780]: I1124 14:19:38.561040     780 scope.go:117] "RemoveContainer" containerID="867c1c2bdba269643d79e651f22075350405e522cc23eb7a82d41b67b0d922a1"
	Nov 24 14:19:38 no-preload-444317 kubelet[780]: E1124 14:19:38.561756     780 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-8pxxl_kubernetes-dashboard(a2981b6c-eb71-4d4c-b08d-ba24f8243546)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8pxxl" podUID="a2981b6c-eb71-4d4c-b08d-ba24f8243546"
	Nov 24 14:19:52 no-preload-444317 kubelet[780]: I1124 14:19:52.113919     780 scope.go:117] "RemoveContainer" containerID="867c1c2bdba269643d79e651f22075350405e522cc23eb7a82d41b67b0d922a1"
	Nov 24 14:19:52 no-preload-444317 kubelet[780]: I1124 14:19:52.419329     780 scope.go:117] "RemoveContainer" containerID="867c1c2bdba269643d79e651f22075350405e522cc23eb7a82d41b67b0d922a1"
	Nov 24 14:19:52 no-preload-444317 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 24 14:19:52 no-preload-444317 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 24 14:19:52 no-preload-444317 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [4037ee2765bda709b510d5f015e77323b584c6dd7204ad7c638918dcd2628c45] <==
	2025/11/24 14:19:18 Using namespace: kubernetes-dashboard
	2025/11/24 14:19:18 Using in-cluster config to connect to apiserver
	2025/11/24 14:19:18 Using secret token for csrf signing
	2025/11/24 14:19:18 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/24 14:19:18 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/24 14:19:18 Successful initial request to the apiserver, version: v1.34.1
	2025/11/24 14:19:18 Generating JWE encryption key
	2025/11/24 14:19:18 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/24 14:19:18 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/24 14:19:18 Initializing JWE encryption key from synchronized object
	2025/11/24 14:19:18 Creating in-cluster Sidecar client
	2025/11/24 14:19:18 Serving insecurely on HTTP port: 9090
	2025/11/24 14:19:18 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/24 14:19:48 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/24 14:19:18 Starting overwatch
	
	
	==> storage-provisioner [7500421b8d518959966543c2fb44123cf1e925d09b9f3a19358de4f5ccaf03f5] <==
	I1124 14:19:36.454041       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1124 14:19:36.467287       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1124 14:19:36.467463       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1124 14:19:36.469758       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:19:39.925350       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:19:44.185780       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:19:47.784302       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:19:50.837993       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:19:53.868372       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:19:53.885985       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1124 14:19:53.886143       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1124 14:19:53.886300       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-444317_01c6efbd-497c-4bba-bc5a-ac22cf644059!
	I1124 14:19:53.886568       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"32afcba2-0797-489f-b777-85af3a10990a", APIVersion:"v1", ResourceVersion:"690", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-444317_01c6efbd-497c-4bba-bc5a-ac22cf644059 became leader
	W1124 14:19:53.897165       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:19:53.924795       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1124 14:19:53.987460       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-444317_01c6efbd-497c-4bba-bc5a-ac22cf644059!
	W1124 14:19:55.929638       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:19:55.937085       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:19:57.947334       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:19:57.969404       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [9edacfe959f782420519cd918af883fab72dba651cea4c1003317aa7dbb5aee2] <==
	I1124 14:19:05.803675       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1124 14:19:35.806194       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-444317 -n no-preload-444317
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-444317 -n no-preload-444317: exit status 2 (558.258595ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-444317 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/Pause (7.98s)
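The Pause failures in this group share the pattern visible in the embed-certs stderr trace below: minikube disables the kubelet, enumerates CRI containers with crictl, then probes the runtime state with `sudo runc list -f json`, which fails with "open /run/runc: no such file or directory" and is retried. A minimal, hypothetical Go sketch of that probe-and-retry step (assumed attempt count and delay; not minikube's actual implementation):

	// Hypothetical sketch, not minikube source: probe `sudo runc list -f json`
	// and retry on failure, mirroring the retry.go behavior in the trace below.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// listRuncContainers shells out to runc; on a CRI-O node the runc state
	// directory (/run/runc in the trace) may be absent, so the first calls
	// can fail with "open /run/runc: no such file or directory".
	func listRuncContainers(attempts int, delay time.Duration) ([]byte, error) {
		var lastErr error
		for i := 0; i < attempts; i++ {
			out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
			if err == nil {
				return out, nil
			}
			lastErr = fmt.Errorf("runc list: %w: %s", err, out)
			time.Sleep(delay) // the trace shows a ~295ms backoff before retrying
		}
		return nil, lastErr
	}

	func main() {
		out, err := listRuncContainers(3, 300*time.Millisecond)
		if err != nil {
			fmt.Println("giving up:", err)
			return
		}
		fmt.Println(string(out))
	}

When the probe never succeeds, the pause command aborts, which is consistent with the exit status 80 recorded for these Pause tests.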

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (7.58s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-720293 --alsologtostderr -v=1
E1124 14:20:42.346783    4611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/old-k8s-version-706771/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p embed-certs-720293 --alsologtostderr -v=1: exit status 80 (2.534539359s)

                                                
                                                
-- stdout --
	* Pausing node embed-certs-720293 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1124 14:20:42.048153  204698 out.go:360] Setting OutFile to fd 1 ...
	I1124 14:20:42.048271  204698 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 14:20:42.048277  204698 out.go:374] Setting ErrFile to fd 2...
	I1124 14:20:42.048283  204698 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 14:20:42.048662  204698 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-2805/.minikube/bin
	I1124 14:20:42.053786  204698 out.go:368] Setting JSON to false
	I1124 14:20:42.053869  204698 mustload.go:66] Loading cluster: embed-certs-720293
	I1124 14:20:42.054346  204698 config.go:182] Loaded profile config "embed-certs-720293": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 14:20:42.054873  204698 cli_runner.go:164] Run: docker container inspect embed-certs-720293 --format={{.State.Status}}
	I1124 14:20:42.099187  204698 host.go:66] Checking if "embed-certs-720293" exists ...
	I1124 14:20:42.099557  204698 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 14:20:42.220398  204698 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-24 14:20:42.201525191 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 14:20:42.221100  204698 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763503576-21924/minikube-v1.37.0-1763503576-21924-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763503576-21924-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:embed-certs-720293 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1124 14:20:42.225468  204698 out.go:179] * Pausing node embed-certs-720293 ... 
	I1124 14:20:42.228419  204698 host.go:66] Checking if "embed-certs-720293" exists ...
	I1124 14:20:42.228953  204698 ssh_runner.go:195] Run: systemctl --version
	I1124 14:20:42.229022  204698 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-720293
	I1124 14:20:42.273376  204698 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/embed-certs-720293/id_rsa Username:docker}
	I1124 14:20:42.390651  204698 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 14:20:42.420188  204698 pause.go:52] kubelet running: true
	I1124 14:20:42.420253  204698 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1124 14:20:42.734625  204698 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1124 14:20:42.734721  204698 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1124 14:20:42.829658  204698 cri.go:89] found id: "467261f4d2571726e5a5f78ed70bec6f37018a976a06002a0150522a12c9e447"
	I1124 14:20:42.829719  204698 cri.go:89] found id: "42aec503a0da6a36b971d1c7b96c464efde9b44c146277076e103b249a49c5de"
	I1124 14:20:42.829741  204698 cri.go:89] found id: "00a43be63f0ee0c7c3caa8dd1d91a6db23be6515f0e5612d27abb6cdc903cf4b"
	I1124 14:20:42.829764  204698 cri.go:89] found id: "39826316da8861c6c390b371b0dfee6fb9b2d796fc941ea2368f1621b3599610"
	I1124 14:20:42.829789  204698 cri.go:89] found id: "cd3870847e89f7ad9748689f39400529da3e34ea80bd2e9c5d50b94014870174"
	I1124 14:20:42.829815  204698 cri.go:89] found id: "7a20914603732648c5d9ff34200e808b2002ae00dc4000fe37adb370011a3888"
	I1124 14:20:42.829842  204698 cri.go:89] found id: "7634741324dd1d91cc93df52ab62f4e54882e2826f3185dee5ff5c38bdffd3cf"
	I1124 14:20:42.829865  204698 cri.go:89] found id: "43d901c75e4d3ea7cfdd826b2f38e870e2be39de21570400fd187f7a2239344b"
	I1124 14:20:42.829890  204698 cri.go:89] found id: "da3f0798a706df28b161fc15c24ff964503411fba4af93d09ab0786003dc32ea"
	I1124 14:20:42.829927  204698 cri.go:89] found id: "f4b8663a77e015998912426d36c1f9b5b969ea5de94d97c635621222587cc7c2"
	I1124 14:20:42.829955  204698 cri.go:89] found id: "253bcd0f532f66e0e5b2fc4a4c88d5958c2a131d6a5a69048a5a2749195b6547"
	I1124 14:20:42.829974  204698 cri.go:89] found id: ""
	I1124 14:20:42.830041  204698 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 14:20:42.844095  204698 retry.go:31] will retry after 294.545148ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T14:20:42Z" level=error msg="open /run/runc: no such file or directory"
	I1124 14:20:43.139446  204698 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 14:20:43.163247  204698 pause.go:52] kubelet running: false
	I1124 14:20:43.163305  204698 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1124 14:20:43.505920  204698 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1124 14:20:43.506002  204698 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1124 14:20:43.633729  204698 cri.go:89] found id: "467261f4d2571726e5a5f78ed70bec6f37018a976a06002a0150522a12c9e447"
	I1124 14:20:43.633754  204698 cri.go:89] found id: "42aec503a0da6a36b971d1c7b96c464efde9b44c146277076e103b249a49c5de"
	I1124 14:20:43.633760  204698 cri.go:89] found id: "00a43be63f0ee0c7c3caa8dd1d91a6db23be6515f0e5612d27abb6cdc903cf4b"
	I1124 14:20:43.633763  204698 cri.go:89] found id: "39826316da8861c6c390b371b0dfee6fb9b2d796fc941ea2368f1621b3599610"
	I1124 14:20:43.633766  204698 cri.go:89] found id: "cd3870847e89f7ad9748689f39400529da3e34ea80bd2e9c5d50b94014870174"
	I1124 14:20:43.633770  204698 cri.go:89] found id: "7a20914603732648c5d9ff34200e808b2002ae00dc4000fe37adb370011a3888"
	I1124 14:20:43.633773  204698 cri.go:89] found id: "7634741324dd1d91cc93df52ab62f4e54882e2826f3185dee5ff5c38bdffd3cf"
	I1124 14:20:43.633776  204698 cri.go:89] found id: "43d901c75e4d3ea7cfdd826b2f38e870e2be39de21570400fd187f7a2239344b"
	I1124 14:20:43.633779  204698 cri.go:89] found id: "da3f0798a706df28b161fc15c24ff964503411fba4af93d09ab0786003dc32ea"
	I1124 14:20:43.633785  204698 cri.go:89] found id: "f4b8663a77e015998912426d36c1f9b5b969ea5de94d97c635621222587cc7c2"
	I1124 14:20:43.633789  204698 cri.go:89] found id: "253bcd0f532f66e0e5b2fc4a4c88d5958c2a131d6a5a69048a5a2749195b6547"
	I1124 14:20:43.633792  204698 cri.go:89] found id: ""
	I1124 14:20:43.633842  204698 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 14:20:43.649746  204698 retry.go:31] will retry after 396.114578ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T14:20:43Z" level=error msg="open /run/runc: no such file or directory"
	I1124 14:20:44.046295  204698 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 14:20:44.062388  204698 pause.go:52] kubelet running: false
	I1124 14:20:44.062472  204698 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1124 14:20:44.365109  204698 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1124 14:20:44.365199  204698 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1124 14:20:44.484473  204698 cri.go:89] found id: "467261f4d2571726e5a5f78ed70bec6f37018a976a06002a0150522a12c9e447"
	I1124 14:20:44.484492  204698 cri.go:89] found id: "42aec503a0da6a36b971d1c7b96c464efde9b44c146277076e103b249a49c5de"
	I1124 14:20:44.484497  204698 cri.go:89] found id: "00a43be63f0ee0c7c3caa8dd1d91a6db23be6515f0e5612d27abb6cdc903cf4b"
	I1124 14:20:44.484500  204698 cri.go:89] found id: "39826316da8861c6c390b371b0dfee6fb9b2d796fc941ea2368f1621b3599610"
	I1124 14:20:44.484503  204698 cri.go:89] found id: "cd3870847e89f7ad9748689f39400529da3e34ea80bd2e9c5d50b94014870174"
	I1124 14:20:44.484506  204698 cri.go:89] found id: "7a20914603732648c5d9ff34200e808b2002ae00dc4000fe37adb370011a3888"
	I1124 14:20:44.484509  204698 cri.go:89] found id: "7634741324dd1d91cc93df52ab62f4e54882e2826f3185dee5ff5c38bdffd3cf"
	I1124 14:20:44.484512  204698 cri.go:89] found id: "43d901c75e4d3ea7cfdd826b2f38e870e2be39de21570400fd187f7a2239344b"
	I1124 14:20:44.484515  204698 cri.go:89] found id: "da3f0798a706df28b161fc15c24ff964503411fba4af93d09ab0786003dc32ea"
	I1124 14:20:44.484521  204698 cri.go:89] found id: "f4b8663a77e015998912426d36c1f9b5b969ea5de94d97c635621222587cc7c2"
	I1124 14:20:44.484524  204698 cri.go:89] found id: "253bcd0f532f66e0e5b2fc4a4c88d5958c2a131d6a5a69048a5a2749195b6547"
	I1124 14:20:44.484527  204698 cri.go:89] found id: ""
	I1124 14:20:44.484595  204698 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 14:20:44.502459  204698 out.go:203] 
	W1124 14:20:44.505612  204698 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T14:20:44Z" level=error msg="open /run/runc: no such file or directory"
	
	W1124 14:20:44.505635  204698 out.go:285] * 
	W1124 14:20:44.511234  204698 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1124 14:20:44.513287  204698 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p embed-certs-720293 --alsologtostderr -v=1 failed: exit status 80
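The failure mode is the same across the Pause tests in this report: crictl (backed by crio) enumerates the kube-system containers, but the follow-up `sudo runc list -f json` aborts because /run/runc does not exist on the node, so pause.go retries and finally exits with GUEST_PAUSE. A minimal diagnosis sketch over SSH to the node; the first two commands are taken from the log above, while the state-directory paths and the crio config location are assumptions, not confirmed by this run:

	sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system   # crio does see containers
	sudo runc list -f json                  # fails here: open /run/runc: no such file or directory
	ls -d /run/runc /run/crun 2>/dev/null   # assumed locations: which OCI runtime state dirs actually exist?
	grep -n runtime /etc/crio/crio.conf     # assumed config path: check the configured default_runtime/runtime_root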
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-720293
helpers_test.go:243: (dbg) docker inspect embed-certs-720293:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "70d00db6e7822d3b00ce565e804c7ecaca79c8fb11b2d568f3f30fb3df09a34b",
	        "Created": "2025-11-24T14:17:44.795163657Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 198952,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T14:19:32.824615727Z",
	            "FinishedAt": "2025-11-24T14:19:31.995501223Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/70d00db6e7822d3b00ce565e804c7ecaca79c8fb11b2d568f3f30fb3df09a34b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/70d00db6e7822d3b00ce565e804c7ecaca79c8fb11b2d568f3f30fb3df09a34b/hostname",
	        "HostsPath": "/var/lib/docker/containers/70d00db6e7822d3b00ce565e804c7ecaca79c8fb11b2d568f3f30fb3df09a34b/hosts",
	        "LogPath": "/var/lib/docker/containers/70d00db6e7822d3b00ce565e804c7ecaca79c8fb11b2d568f3f30fb3df09a34b/70d00db6e7822d3b00ce565e804c7ecaca79c8fb11b2d568f3f30fb3df09a34b-json.log",
	        "Name": "/embed-certs-720293",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-720293:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-720293",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "70d00db6e7822d3b00ce565e804c7ecaca79c8fb11b2d568f3f30fb3df09a34b",
	                "LowerDir": "/var/lib/docker/overlay2/102ebfc08e7b1e712e2de1b9f813877a1efeaf0db28c4c987f30c212819821a6-init/diff:/var/lib/docker/overlay2/13a44a1c9c7389f495d930a01834ff28273a0e5eb2fe3411fc4db3ff0709690d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/102ebfc08e7b1e712e2de1b9f813877a1efeaf0db28c4c987f30c212819821a6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/102ebfc08e7b1e712e2de1b9f813877a1efeaf0db28c4c987f30c212819821a6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/102ebfc08e7b1e712e2de1b9f813877a1efeaf0db28c4c987f30c212819821a6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-720293",
	                "Source": "/var/lib/docker/volumes/embed-certs-720293/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-720293",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-720293",
	                "name.minikube.sigs.k8s.io": "embed-certs-720293",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5b8f0fded6a4ad6c1295833399fbeda46e1a1b7d88b11e35f03ef7e574a6475a",
	            "SandboxKey": "/var/run/docker/netns/5b8f0fded6a4",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33073"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33074"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33077"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33075"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33076"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-720293": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "0e:5d:d8:4d:25:75",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "8c89ad55a017b9d150fec1f0d910c923b1dbfb234d3a49fcfd228e2952fc9581",
	                    "EndpointID": "4b3ca5a0a9dbf74d8b462d663ee82b33e7bf1a726944be4c3fa2c1d40f071c91",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-720293",
	                        "70d00db6e782"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
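The SSH endpoint the pause command used (127.0.0.1:33073 in sshutil.go above) can be read straight out of this inspect output with the same Go template the harness runs; a hand-run sketch against the still-running container:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' embed-certs-720293
	# prints 33073, matching the "22/tcp" entry under NetworkSettings.Ports above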
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-720293 -n embed-certs-720293
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-720293 -n embed-certs-720293: exit status 2 (499.203528ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-720293 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-720293 logs -n 25: (1.554193726s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p old-k8s-version-706771 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-706771       │ jenkins │ v1.37.0 │ 24 Nov 25 14:15 UTC │ 24 Nov 25 14:16 UTC │
	│ image   │ old-k8s-version-706771 image list --format=json                                                                                                                                                                                               │ old-k8s-version-706771       │ jenkins │ v1.37.0 │ 24 Nov 25 14:16 UTC │ 24 Nov 25 14:16 UTC │
	│ pause   │ -p old-k8s-version-706771 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-706771       │ jenkins │ v1.37.0 │ 24 Nov 25 14:16 UTC │                     │
	│ start   │ -p cert-expiration-032076 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-032076       │ jenkins │ v1.37.0 │ 24 Nov 25 14:17 UTC │ 24 Nov 25 14:17 UTC │
	│ delete  │ -p old-k8s-version-706771                                                                                                                                                                                                                     │ old-k8s-version-706771       │ jenkins │ v1.37.0 │ 24 Nov 25 14:17 UTC │ 24 Nov 25 14:17 UTC │
	│ delete  │ -p old-k8s-version-706771                                                                                                                                                                                                                     │ old-k8s-version-706771       │ jenkins │ v1.37.0 │ 24 Nov 25 14:17 UTC │ 24 Nov 25 14:17 UTC │
	│ start   │ -p no-preload-444317 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-444317            │ jenkins │ v1.37.0 │ 24 Nov 25 14:17 UTC │ 24 Nov 25 14:18 UTC │
	│ delete  │ -p cert-expiration-032076                                                                                                                                                                                                                     │ cert-expiration-032076       │ jenkins │ v1.37.0 │ 24 Nov 25 14:17 UTC │ 24 Nov 25 14:17 UTC │
	│ start   │ -p embed-certs-720293 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-720293           │ jenkins │ v1.37.0 │ 24 Nov 25 14:17 UTC │ 24 Nov 25 14:19 UTC │
	│ addons  │ enable metrics-server -p no-preload-444317 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-444317            │ jenkins │ v1.37.0 │ 24 Nov 25 14:18 UTC │                     │
	│ stop    │ -p no-preload-444317 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-444317            │ jenkins │ v1.37.0 │ 24 Nov 25 14:18 UTC │ 24 Nov 25 14:18 UTC │
	│ addons  │ enable dashboard -p no-preload-444317 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-444317            │ jenkins │ v1.37.0 │ 24 Nov 25 14:18 UTC │ 24 Nov 25 14:18 UTC │
	│ start   │ -p no-preload-444317 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-444317            │ jenkins │ v1.37.0 │ 24 Nov 25 14:18 UTC │ 24 Nov 25 14:19 UTC │
	│ addons  │ enable metrics-server -p embed-certs-720293 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-720293           │ jenkins │ v1.37.0 │ 24 Nov 25 14:19 UTC │                     │
	│ stop    │ -p embed-certs-720293 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-720293           │ jenkins │ v1.37.0 │ 24 Nov 25 14:19 UTC │ 24 Nov 25 14:19 UTC │
	│ addons  │ enable dashboard -p embed-certs-720293 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-720293           │ jenkins │ v1.37.0 │ 24 Nov 25 14:19 UTC │ 24 Nov 25 14:19 UTC │
	│ start   │ -p embed-certs-720293 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-720293           │ jenkins │ v1.37.0 │ 24 Nov 25 14:19 UTC │ 24 Nov 25 14:20 UTC │
	│ image   │ no-preload-444317 image list --format=json                                                                                                                                                                                                    │ no-preload-444317            │ jenkins │ v1.37.0 │ 24 Nov 25 14:19 UTC │ 24 Nov 25 14:19 UTC │
	│ pause   │ -p no-preload-444317 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-444317            │ jenkins │ v1.37.0 │ 24 Nov 25 14:19 UTC │                     │
	│ delete  │ -p no-preload-444317                                                                                                                                                                                                                          │ no-preload-444317            │ jenkins │ v1.37.0 │ 24 Nov 25 14:20 UTC │ 24 Nov 25 14:20 UTC │
	│ delete  │ -p no-preload-444317                                                                                                                                                                                                                          │ no-preload-444317            │ jenkins │ v1.37.0 │ 24 Nov 25 14:20 UTC │ 24 Nov 25 14:20 UTC │
	│ delete  │ -p disable-driver-mounts-799392                                                                                                                                                                                                               │ disable-driver-mounts-799392 │ jenkins │ v1.37.0 │ 24 Nov 25 14:20 UTC │ 24 Nov 25 14:20 UTC │
	│ start   │ -p default-k8s-diff-port-152851 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-152851 │ jenkins │ v1.37.0 │ 24 Nov 25 14:20 UTC │                     │
	│ image   │ embed-certs-720293 image list --format=json                                                                                                                                                                                                   │ embed-certs-720293           │ jenkins │ v1.37.0 │ 24 Nov 25 14:20 UTC │ 24 Nov 25 14:20 UTC │
	│ pause   │ -p embed-certs-720293 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-720293           │ jenkins │ v1.37.0 │ 24 Nov 25 14:20 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 14:20:04
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 14:20:04.198524  202335 out.go:360] Setting OutFile to fd 1 ...
	I1124 14:20:04.198976  202335 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 14:20:04.198986  202335 out.go:374] Setting ErrFile to fd 2...
	I1124 14:20:04.198991  202335 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 14:20:04.199262  202335 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-2805/.minikube/bin
	I1124 14:20:04.199735  202335 out.go:368] Setting JSON to false
	I1124 14:20:04.200705  202335 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":7356,"bootTime":1763986649,"procs":208,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1124 14:20:04.200769  202335 start.go:143] virtualization:  
	I1124 14:20:04.205201  202335 out.go:179] * [default-k8s-diff-port-152851] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1124 14:20:04.209580  202335 out.go:179]   - MINIKUBE_LOCATION=21932
	I1124 14:20:04.209828  202335 notify.go:221] Checking for updates...
	I1124 14:20:04.218450  202335 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 14:20:04.221692  202335 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21932-2805/kubeconfig
	I1124 14:20:04.224881  202335 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21932-2805/.minikube
	I1124 14:20:04.228173  202335 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1124 14:20:04.231299  202335 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 14:20:04.234802  202335 config.go:182] Loaded profile config "embed-certs-720293": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 14:20:04.234965  202335 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 14:20:04.282871  202335 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1124 14:20:04.283110  202335 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 14:20:04.392646  202335 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-24 14:20:04.381964454 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 14:20:04.392760  202335 docker.go:319] overlay module found
	I1124 14:20:04.396815  202335 out.go:179] * Using the docker driver based on user configuration
	I1124 14:20:04.399891  202335 start.go:309] selected driver: docker
	I1124 14:20:04.399923  202335 start.go:927] validating driver "docker" against <nil>
	I1124 14:20:04.399938  202335 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 14:20:04.400698  202335 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 14:20:04.500553  202335 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-24 14:20:04.4908985 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 14:20:04.500798  202335 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1124 14:20:04.501155  202335 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 14:20:04.507862  202335 out.go:179] * Using Docker driver with root privileges
	I1124 14:20:04.510891  202335 cni.go:84] Creating CNI manager for ""
	I1124 14:20:04.510970  202335 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 14:20:04.510990  202335 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1124 14:20:04.511075  202335 start.go:353] cluster config:
	{Name:default-k8s-diff-port-152851 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-152851 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 14:20:04.514628  202335 out.go:179] * Starting "default-k8s-diff-port-152851" primary control-plane node in "default-k8s-diff-port-152851" cluster
	I1124 14:20:04.517789  202335 cache.go:134] Beginning downloading kic base image for docker with crio
	I1124 14:20:04.521016  202335 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1124 14:20:04.523870  202335 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 14:20:04.523923  202335 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21932-2805/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1124 14:20:04.523933  202335 cache.go:65] Caching tarball of preloaded images
	I1124 14:20:04.524027  202335 preload.go:238] Found /home/jenkins/minikube-integration/21932-2805/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1124 14:20:04.524042  202335 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1124 14:20:04.524150  202335 profile.go:143] Saving config to /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/default-k8s-diff-port-152851/config.json ...
	I1124 14:20:04.524181  202335 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/default-k8s-diff-port-152851/config.json: {Name:mk81282ee6baabf5ef7a33c2dea2ac9e00f5abf4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:20:04.524334  202335 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1124 14:20:04.553574  202335 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1124 14:20:04.553597  202335 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1124 14:20:04.553618  202335 cache.go:240] Successfully downloaded all kic artifacts
	I1124 14:20:04.553649  202335 start.go:360] acquireMachinesLock for default-k8s-diff-port-152851: {Name:mke46aeaf15b4ddafe579277f6642f055937b3b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 14:20:04.553784  202335 start.go:364] duration metric: took 107.021µs to acquireMachinesLock for "default-k8s-diff-port-152851"
	I1124 14:20:04.553817  202335 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-152851 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-152851 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 14:20:04.553965  202335 start.go:125] createHost starting for "" (driver="docker")
	W1124 14:20:02.670651  198824 pod_ready.go:104] pod "coredns-66bc5c9577-6nztq" is not "Ready", error: <nil>
	W1124 14:20:04.672690  198824 pod_ready.go:104] pod "coredns-66bc5c9577-6nztq" is not "Ready", error: <nil>
	W1124 14:20:07.166252  198824 pod_ready.go:104] pod "coredns-66bc5c9577-6nztq" is not "Ready", error: <nil>
	I1124 14:20:04.557666  202335 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1124 14:20:04.557962  202335 start.go:159] libmachine.API.Create for "default-k8s-diff-port-152851" (driver="docker")
	I1124 14:20:04.558000  202335 client.go:173] LocalClient.Create starting
	I1124 14:20:04.558095  202335 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca.pem
	I1124 14:20:04.558274  202335 main.go:143] libmachine: Decoding PEM data...
	I1124 14:20:04.558298  202335 main.go:143] libmachine: Parsing certificate...
	I1124 14:20:04.558377  202335 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21932-2805/.minikube/certs/cert.pem
	I1124 14:20:04.558407  202335 main.go:143] libmachine: Decoding PEM data...
	I1124 14:20:04.558438  202335 main.go:143] libmachine: Parsing certificate...
	I1124 14:20:04.558828  202335 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-152851 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1124 14:20:04.583684  202335 cli_runner.go:211] docker network inspect default-k8s-diff-port-152851 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1124 14:20:04.583770  202335 network_create.go:284] running [docker network inspect default-k8s-diff-port-152851] to gather additional debugging logs...
	I1124 14:20:04.583786  202335 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-152851
	W1124 14:20:04.602664  202335 cli_runner.go:211] docker network inspect default-k8s-diff-port-152851 returned with exit code 1
	I1124 14:20:04.602697  202335 network_create.go:287] error running [docker network inspect default-k8s-diff-port-152851]: docker network inspect default-k8s-diff-port-152851: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-152851 not found
	I1124 14:20:04.602713  202335 network_create.go:289] output of [docker network inspect default-k8s-diff-port-152851]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-152851 not found
	
	** /stderr **
	I1124 14:20:04.602815  202335 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 14:20:04.634619  202335 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-b3087ee9f269 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:1a:07:60:94:e6:54} reservation:<nil>}
	I1124 14:20:04.634970  202335 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-87dca5a19352 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:e6:6c:c1:85:45:94} reservation:<nil>}
	I1124 14:20:04.635512  202335 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-9e995bd1b79e IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:82:f1:73:f5:6f:cf} reservation:<nil>}
	I1124 14:20:04.635919  202335 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a40120}
	I1124 14:20:04.635936  202335 network_create.go:124] attempt to create docker network default-k8s-diff-port-152851 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1124 14:20:04.635989  202335 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-152851 default-k8s-diff-port-152851
	I1124 14:20:04.707041  202335 network_create.go:108] docker network default-k8s-diff-port-152851 192.168.76.0/24 created
	I1124 14:20:04.707090  202335 kic.go:121] calculated static IP "192.168.76.2" for the "default-k8s-diff-port-152851" container
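	# note (annotation, not log output): the subnet scan above walks the private ranges in
	# order (192.168.49/58/67 taken, 192.168.76 free) and derives the node IP as .2 of the
	# chosen subnet. a sketch for listing the claimed networks by the same labels the
	# create command sets:
	#   docker network ls --filter label=created_by.minikube.sigs.k8s.io=true
	#   docker network inspect default-k8s-diff-port-152851 --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}'   # 192.168.76.0/24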
	I1124 14:20:04.707163  202335 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1124 14:20:04.735419  202335 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-152851 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-152851 --label created_by.minikube.sigs.k8s.io=true
	I1124 14:20:04.766257  202335 oci.go:103] Successfully created a docker volume default-k8s-diff-port-152851
	I1124 14:20:04.766348  202335 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-152851-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-152851 --entrypoint /usr/bin/test -v default-k8s-diff-port-152851:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1124 14:20:05.654315  202335 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-152851
	I1124 14:20:05.654377  202335 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 14:20:05.654386  202335 kic.go:194] Starting extracting preloaded images to volume ...
	I1124 14:20:05.654457  202335 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21932-2805/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-152851:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir
	W1124 14:20:09.166849  198824 pod_ready.go:104] pod "coredns-66bc5c9577-6nztq" is not "Ready", error: <nil>
	W1124 14:20:11.168930  198824 pod_ready.go:104] pod "coredns-66bc5c9577-6nztq" is not "Ready", error: <nil>
	I1124 14:20:10.214747  202335 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21932-2805/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-152851:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir: (4.560254044s)
	I1124 14:20:10.214781  202335 kic.go:203] duration metric: took 4.560391417s to extract preloaded images to volume ...
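The two docker run invocations above implement the disposable-container pattern for preloads: a throwaway container mounts the .tar.lz4 read-only alongside the named volume and untars it with lz4, so the host never needs tar/lz4 itself. A sketch of the same invocation; volume, tarball path, and image name are placeholders:

    // extract_preload.go - a sketch of the tar-in-container extraction
    // logged above. Mounts the preload read-only plus the target volume,
    // then untars inside a disposable container.
    package main

    import (
    	"log"
    	"os/exec"
    )

    func main() {
    	tarball := "/path/to/preloaded-images-k8s.tar.lz4" // placeholder
    	cmd := exec.Command("docker", "run", "--rm",
    		"--entrypoint", "/usr/bin/tar",
    		"-v", tarball+":/preloaded.tar:ro",
    		"-v", "demo-volume:/extractDir",
    		"example/kicbase:latest", // placeholder image providing tar + lz4
    		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
    	if out, err := cmd.CombinedOutput(); err != nil {
    		log.Fatalf("extract failed: %v\n%s", err, out)
    	}
    	log.Print("preload extracted into volume")
    }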
	W1124 14:20:10.214927  202335 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1124 14:20:10.215034  202335 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1124 14:20:10.285849  202335 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-152851 --name default-k8s-diff-port-152851 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-152851 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-152851 --network default-k8s-diff-port-152851 --ip 192.168.76.2 --volume default-k8s-diff-port-152851:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1124 14:20:10.594232  202335 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-152851 --format={{.State.Running}}
	I1124 14:20:10.611266  202335 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-152851 --format={{.State.Status}}
	I1124 14:20:10.639225  202335 cli_runner.go:164] Run: docker exec default-k8s-diff-port-152851 stat /var/lib/dpkg/alternatives/iptables
	I1124 14:20:10.696366  202335 oci.go:144] the created container "default-k8s-diff-port-152851" has a running status.
	I1124 14:20:10.696400  202335 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21932-2805/.minikube/machines/default-k8s-diff-port-152851/id_rsa...
	I1124 14:20:11.119690  202335 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21932-2805/.minikube/machines/default-k8s-diff-port-152851/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1124 14:20:11.146515  202335 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-152851 --format={{.State.Status}}
	I1124 14:20:11.168183  202335 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1124 14:20:11.168207  202335 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-152851 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1124 14:20:11.213778  202335 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-152851 --format={{.State.Status}}
	I1124 14:20:11.231702  202335 machine.go:94] provisionDockerMachine start ...
	I1124 14:20:11.231799  202335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-152851
	I1124 14:20:11.249363  202335 main.go:143] libmachine: Using SSH client type: native
	I1124 14:20:11.249713  202335 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1124 14:20:11.249727  202335 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 14:20:11.250305  202335 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48962->127.0.0.1:33078: read: connection reset by peer
	I1124 14:20:14.403266  202335 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-152851
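The handshake failure at 14:20:11 ("connection reset by peer") followed by a clean result at 14:20:14 is the usual race between container start-up and sshd coming up; libmachine simply retries the dial. A minimal sketch of such a retry loop using golang.org/x/crypto/ssh; port, user, and key path are placeholders:

    // ssh_retry.go - a sketch of retrying an SSH dial until sshd inside the
    // container accepts connections, as libmachine does above.
    package main

    import (
    	"log"
    	"os"
    	"time"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	key, err := os.ReadFile("/path/to/machines/demo/id_rsa") // placeholder
    	if err != nil {
    		log.Fatal(err)
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		log.Fatal(err)
    	}
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for localhost test nodes only
    		Timeout:         5 * time.Second,
    	}
    	for attempt := 1; attempt <= 10; attempt++ {
    		client, err := ssh.Dial("tcp", "127.0.0.1:33078", cfg)
    		if err != nil {
    			log.Printf("dial attempt %d: %v", attempt, err) // e.g. connection reset by peer
    			time.Sleep(time.Second)
    			continue
    		}
    		defer client.Close()
    		log.Print("ssh is up")
    		return
    	}
    	log.Fatal("sshd never came up")
    }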
	
	I1124 14:20:14.403289  202335 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-152851"
	I1124 14:20:14.403425  202335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-152851
	I1124 14:20:14.421245  202335 main.go:143] libmachine: Using SSH client type: native
	I1124 14:20:14.421644  202335 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1124 14:20:14.421665  202335 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-152851 && echo "default-k8s-diff-port-152851" | sudo tee /etc/hostname
	I1124 14:20:14.589533  202335 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-152851
	
	I1124 14:20:14.589613  202335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-152851
	I1124 14:20:14.609605  202335 main.go:143] libmachine: Using SSH client type: native
	I1124 14:20:14.610187  202335 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1124 14:20:14.610211  202335 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-152851' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-152851/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-152851' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 14:20:14.763857  202335 main.go:143] libmachine: SSH cmd err, output: <nil>: 
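The shell snippet above is an idempotent /etc/hosts edit: if the hostname is not yet present, rewrite an existing 127.0.1.1 line when there is one, otherwise append a fresh entry. A native Go sketch of the same rewrite, operating on a local copy ("hosts.txt") rather than /etc/hosts:

    // hosts_rewrite.go - a sketch of the idempotent hostname entry edit
    // performed over SSH above. Hostname and file path are placeholders.
    package main

    import (
    	"log"
    	"os"
    	"strings"
    )

    func main() {
    	const hostname = "demo-host" // placeholder
    	data, err := os.ReadFile("hosts.txt")
    	if err != nil {
    		log.Fatal(err)
    	}
    	lines := strings.Split(string(data), "\n")
    	replaced := false
    	for i, l := range lines {
    		if strings.HasPrefix(l, "127.0.1.1") {
    			lines[i] = "127.0.1.1 " + hostname
    			replaced = true
    		}
    	}
    	if !replaced {
    		lines = append(lines, "127.0.1.1 "+hostname)
    	}
    	if err := os.WriteFile("hosts.txt", []byte(strings.Join(lines, "\n")), 0644); err != nil {
    		log.Fatal(err)
    	}
    }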
	I1124 14:20:14.763967  202335 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21932-2805/.minikube CaCertPath:/home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21932-2805/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21932-2805/.minikube}
	I1124 14:20:14.764043  202335 ubuntu.go:190] setting up certificates
	I1124 14:20:14.764084  202335 provision.go:84] configureAuth start
	I1124 14:20:14.764209  202335 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-152851
	I1124 14:20:14.782935  202335 provision.go:143] copyHostCerts
	I1124 14:20:14.783016  202335 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-2805/.minikube/ca.pem, removing ...
	I1124 14:20:14.783025  202335 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-2805/.minikube/ca.pem
	I1124 14:20:14.783116  202335 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21932-2805/.minikube/ca.pem (1078 bytes)
	I1124 14:20:14.783238  202335 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-2805/.minikube/cert.pem, removing ...
	I1124 14:20:14.783244  202335 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-2805/.minikube/cert.pem
	I1124 14:20:14.783282  202335 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-2805/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21932-2805/.minikube/cert.pem (1123 bytes)
	I1124 14:20:14.783418  202335 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-2805/.minikube/key.pem, removing ...
	I1124 14:20:14.783426  202335 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-2805/.minikube/key.pem
	I1124 14:20:14.783463  202335 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-2805/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21932-2805/.minikube/key.pem (1675 bytes)
	I1124 14:20:14.783557  202335 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21932-2805/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-152851 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-152851 localhost minikube]
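configureAuth issues a server certificate signed by the machine CA, with the SAN set shown in the log line above (loopback, the node IP, the machine name, localhost, minikube). A minimal crypto/x509 sketch of issuing a cert with that SAN shape; for self-containment it generates a throwaway CA in-process, whereas minikube loads ca.pem/ca-key.pem from disk:

    // server_cert.go - a sketch of issuing a SAN-bearing server certificate
    // signed by a CA, mirroring the san=[...] list logged above.
    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"log"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "demoCA"}, // placeholder CA
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().Add(24 * time.Hour),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	if err != nil {
    		log.Fatal(err)
    	}
    	caCert, _ := x509.ParseCertificate(caDER)

    	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	srvTmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{CommonName: "demo-server"},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(24 * time.Hour),
    		// SAN set mirroring the log line above.
    		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
    		DNSNames:    []string{"default-k8s-diff-port-152851", "localhost", "minikube"},
    		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    	}
    	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
    	if err != nil {
    		log.Fatal(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
    }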
	I1124 14:20:15.087707  202335 provision.go:177] copyRemoteCerts
	I1124 14:20:15.087813  202335 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 14:20:15.087883  202335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-152851
	I1124 14:20:15.105542  202335 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/default-k8s-diff-port-152851/id_rsa Username:docker}
	I1124 14:20:15.211038  202335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1124 14:20:15.229178  202335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1124 14:20:15.247466  202335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1124 14:20:15.265068  202335 provision.go:87] duration metric: took 500.947008ms to configureAuth
	I1124 14:20:15.265099  202335 ubuntu.go:206] setting minikube options for container-runtime
	I1124 14:20:15.265293  202335 config.go:182] Loaded profile config "default-k8s-diff-port-152851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 14:20:15.265406  202335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-152851
	I1124 14:20:15.282436  202335 main.go:143] libmachine: Using SSH client type: native
	I1124 14:20:15.282756  202335 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1124 14:20:15.282769  202335 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1124 14:20:15.670282  202335 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1124 14:20:15.670306  202335 machine.go:97] duration metric: took 4.438580402s to provisionDockerMachine
	I1124 14:20:15.670317  202335 client.go:176] duration metric: took 11.112309664s to LocalClient.Create
	I1124 14:20:15.670332  202335 start.go:167] duration metric: took 11.112371178s to libmachine.API.Create "default-k8s-diff-port-152851"
	I1124 14:20:15.670352  202335 start.go:293] postStartSetup for "default-k8s-diff-port-152851" (driver="docker")
	I1124 14:20:15.670366  202335 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 14:20:15.670438  202335 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 14:20:15.670486  202335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-152851
	I1124 14:20:15.687858  202335 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/default-k8s-diff-port-152851/id_rsa Username:docker}
	I1124 14:20:15.795621  202335 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 14:20:15.798925  202335 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 14:20:15.798954  202335 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 14:20:15.798966  202335 filesync.go:126] Scanning /home/jenkins/minikube-integration/21932-2805/.minikube/addons for local assets ...
	I1124 14:20:15.799021  202335 filesync.go:126] Scanning /home/jenkins/minikube-integration/21932-2805/.minikube/files for local assets ...
	I1124 14:20:15.799098  202335 filesync.go:149] local asset: /home/jenkins/minikube-integration/21932-2805/.minikube/files/etc/ssl/certs/46112.pem -> 46112.pem in /etc/ssl/certs
	I1124 14:20:15.799200  202335 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 14:20:15.806825  202335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/files/etc/ssl/certs/46112.pem --> /etc/ssl/certs/46112.pem (1708 bytes)
	I1124 14:20:15.824887  202335 start.go:296] duration metric: took 154.516454ms for postStartSetup
	I1124 14:20:15.825270  202335 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-152851
	I1124 14:20:15.842546  202335 profile.go:143] Saving config to /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/default-k8s-diff-port-152851/config.json ...
	I1124 14:20:15.842841  202335 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 14:20:15.842894  202335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-152851
	I1124 14:20:15.864860  202335 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/default-k8s-diff-port-152851/id_rsa Username:docker}
	I1124 14:20:15.972921  202335 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 14:20:15.981464  202335 start.go:128] duration metric: took 11.427482026s to createHost
	I1124 14:20:15.981488  202335 start.go:83] releasing machines lock for "default-k8s-diff-port-152851", held for 11.427690218s
	I1124 14:20:15.981562  202335 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-152851
	I1124 14:20:15.999018  202335 ssh_runner.go:195] Run: cat /version.json
	I1124 14:20:15.999074  202335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-152851
	I1124 14:20:15.999295  202335 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 14:20:15.999427  202335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-152851
	I1124 14:20:16.025507  202335 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/default-k8s-diff-port-152851/id_rsa Username:docker}
	I1124 14:20:16.038110  202335 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/default-k8s-diff-port-152851/id_rsa Username:docker}
	I1124 14:20:16.131726  202335 ssh_runner.go:195] Run: systemctl --version
	I1124 14:20:16.227734  202335 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1124 14:20:16.272290  202335 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 14:20:16.276779  202335 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 14:20:16.276860  202335 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 14:20:16.305624  202335 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
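The find/-exec mv pass above sidelines competing CNI configs by renaming anything matching *bridge* or *podman* to *.mk_disabled, leaving kindnet to own the node's networking. A native sketch of the same rename pass, against a placeholder copy of /etc/cni/net.d:

    // disable_cni.go - a sketch of the rename pass logged above: sideline
    // bridge/podman CNI configs by appending ".mk_disabled".
    package main

    import (
    	"log"
    	"os"
    	"path/filepath"
    	"strings"
    )

    func main() {
    	dir := "net.d" // placeholder for /etc/cni/net.d
    	entries, err := os.ReadDir(dir)
    	if err != nil {
    		log.Fatal(err)
    	}
    	for _, e := range entries {
    		name := e.Name()
    		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
    			continue
    		}
    		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
    			src := filepath.Join(dir, name)
    			if err := os.Rename(src, src+".mk_disabled"); err != nil {
    				log.Fatal(err)
    			}
    			log.Printf("disabled %s", src)
    		}
    	}
    }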
	I1124 14:20:16.305649  202335 start.go:496] detecting cgroup driver to use...
	I1124 14:20:16.305681  202335 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1124 14:20:16.305741  202335 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1124 14:20:16.324594  202335 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1124 14:20:16.337355  202335 docker.go:218] disabling cri-docker service (if available) ...
	I1124 14:20:16.337441  202335 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 14:20:16.355152  202335 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 14:20:16.375011  202335 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 14:20:16.501048  202335 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 14:20:16.639974  202335 docker.go:234] disabling docker service ...
	I1124 14:20:16.640092  202335 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 14:20:16.661507  202335 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 14:20:16.678764  202335 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 14:20:16.809163  202335 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 14:20:16.939393  202335 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 14:20:16.953524  202335 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 14:20:16.968120  202335 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1124 14:20:16.968215  202335 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 14:20:16.978073  202335 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1124 14:20:16.978156  202335 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 14:20:16.987659  202335 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 14:20:16.997483  202335 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 14:20:17.008824  202335 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 14:20:17.018192  202335 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 14:20:17.027749  202335 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 14:20:17.042176  202335 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 14:20:17.051563  202335 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 14:20:17.059572  202335 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 14:20:17.067106  202335 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 14:20:17.190205  202335 ssh_runner.go:195] Run: sudo systemctl restart crio
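The sed pipeline above pins pause_image, forces cgroup_manager to cgroupfs, pins conmon_cgroup, and seeds default_sysctls in the CRI-O drop-in before daemon-reload and the crio restart. A sketch of the first two key rewrites, working on a local copy of the drop-in rather than /etc/crio/crio.conf.d/02-crio.conf:

    // crio_conf.go - a sketch of the pause_image and cgroup_manager rewrites
    // performed by sed above, as whole-line replacements.
    package main

    import (
    	"log"
    	"os"
    	"strings"
    )

    func main() {
    	data, err := os.ReadFile("02-crio.conf")
    	if err != nil {
    		log.Fatal(err)
    	}
    	lines := strings.Split(string(data), "\n")
    	for i, l := range lines {
    		trimmed := strings.TrimSpace(l)
    		switch {
    		case strings.HasPrefix(trimmed, "pause_image"):
    			lines[i] = `pause_image = "registry.k8s.io/pause:3.10.1"`
    		case strings.HasPrefix(trimmed, "cgroup_manager"):
    			lines[i] = `cgroup_manager = "cgroupfs"`
    		}
    	}
    	if err := os.WriteFile("02-crio.conf", []byte(strings.Join(lines, "\n")), 0644); err != nil {
    		log.Fatal(err)
    	}
    }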
	I1124 14:20:17.367422  202335 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1124 14:20:17.367544  202335 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1124 14:20:17.372313  202335 start.go:564] Will wait 60s for crictl version
	I1124 14:20:17.372419  202335 ssh_runner.go:195] Run: which crictl
	I1124 14:20:17.379641  202335 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 14:20:17.427228  202335 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1124 14:20:17.427342  202335 ssh_runner.go:195] Run: crio --version
	I1124 14:20:17.469021  202335 ssh_runner.go:195] Run: crio --version
	I1124 14:20:17.505376  202335 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	W1124 14:20:13.667196  198824 pod_ready.go:104] pod "coredns-66bc5c9577-6nztq" is not "Ready", error: <nil>
	W1124 14:20:16.168503  198824 pod_ready.go:104] pod "coredns-66bc5c9577-6nztq" is not "Ready", error: <nil>
	I1124 14:20:17.508284  202335 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-152851 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 14:20:17.525499  202335 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1124 14:20:17.529738  202335 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 14:20:17.541127  202335 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-152851 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-152851 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 14:20:17.541255  202335 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 14:20:17.541313  202335 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 14:20:17.583559  202335 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 14:20:17.583585  202335 crio.go:433] Images already preloaded, skipping extraction
	I1124 14:20:17.583642  202335 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 14:20:17.611510  202335 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 14:20:17.611534  202335 cache_images.go:86] Images are preloaded, skipping loading
	I1124 14:20:17.611543  202335 kubeadm.go:935] updating node { 192.168.76.2 8444 v1.34.1 crio true true} ...
	I1124 14:20:17.611665  202335 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-152851 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-152851 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1124 14:20:17.611748  202335 ssh_runner.go:195] Run: crio config
	I1124 14:20:17.676085  202335 cni.go:84] Creating CNI manager for ""
	I1124 14:20:17.676158  202335 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 14:20:17.676187  202335 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 14:20:17.676240  202335 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-152851 NodeName:default-k8s-diff-port-152851 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 14:20:17.676408  202335 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-152851"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
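The rendered kubeadm config above is a four-document YAML stream: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. A small sketch that walks such a stream and prints each document's apiVersion and kind, assuming gopkg.in/yaml.v3 and a local kubeadm.yaml:

    // kubeadm_kinds.go - a sketch that decodes a multi-document kubeadm
    // config like the one rendered above and prints each document's kind.
    package main

    import (
    	"errors"
    	"fmt"
    	"io"
    	"log"
    	"os"

    	"gopkg.in/yaml.v3"
    )

    func main() {
    	f, err := os.Open("kubeadm.yaml")
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer f.Close()
    	dec := yaml.NewDecoder(f)
    	for {
    		var doc struct {
    			APIVersion string `yaml:"apiVersion"`
    			Kind       string `yaml:"kind"`
    		}
    		if err := dec.Decode(&doc); err != nil {
    			if errors.Is(err, io.EOF) {
    				return // end of the YAML stream
    			}
    			log.Fatal(err)
    		}
    		fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
    	}
    }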
	
	I1124 14:20:17.676514  202335 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1124 14:20:17.684937  202335 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 14:20:17.685051  202335 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 14:20:17.692619  202335 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1124 14:20:17.706039  202335 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 14:20:17.718987  202335 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
	I1124 14:20:17.736136  202335 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1124 14:20:17.739715  202335 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 14:20:17.749652  202335 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 14:20:17.873752  202335 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 14:20:17.898314  202335 certs.go:69] Setting up /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/default-k8s-diff-port-152851 for IP: 192.168.76.2
	I1124 14:20:17.898382  202335 certs.go:195] generating shared ca certs ...
	I1124 14:20:17.898413  202335 certs.go:227] acquiring lock for ca certs: {Name:mk5b88bcf3bee8e73291a2c9c79f99bafa2afa7b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:20:17.898623  202335 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21932-2805/.minikube/ca.key
	I1124 14:20:17.898692  202335 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21932-2805/.minikube/proxy-client-ca.key
	I1124 14:20:17.898727  202335 certs.go:257] generating profile certs ...
	I1124 14:20:17.898803  202335 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/default-k8s-diff-port-152851/client.key
	I1124 14:20:17.898842  202335 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/default-k8s-diff-port-152851/client.crt with IP's: []
	I1124 14:20:19.076842  202335 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/default-k8s-diff-port-152851/client.crt ...
	I1124 14:20:19.076926  202335 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/default-k8s-diff-port-152851/client.crt: {Name:mk866d39bdecb66e85282d9e13c4f03f10d86194 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:20:19.077153  202335 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/default-k8s-diff-port-152851/client.key ...
	I1124 14:20:19.077170  202335 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/default-k8s-diff-port-152851/client.key: {Name:mkc00caae046e2cc48f44b7b7aac04ba6a21dd99 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:20:19.077296  202335 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/default-k8s-diff-port-152851/apiserver.key.ec9a3231
	I1124 14:20:19.077318  202335 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/default-k8s-diff-port-152851/apiserver.crt.ec9a3231 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1124 14:20:19.512402  202335 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/default-k8s-diff-port-152851/apiserver.crt.ec9a3231 ...
	I1124 14:20:19.512433  202335 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/default-k8s-diff-port-152851/apiserver.crt.ec9a3231: {Name:mk1d05d9f03c5191ef9c06f9dd3bd1f5350ffbad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:20:19.512629  202335 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/default-k8s-diff-port-152851/apiserver.key.ec9a3231 ...
	I1124 14:20:19.512646  202335 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/default-k8s-diff-port-152851/apiserver.key.ec9a3231: {Name:mk058e0ece22697ffefcd3af3bfdb811e8504c78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:20:19.512728  202335 certs.go:382] copying /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/default-k8s-diff-port-152851/apiserver.crt.ec9a3231 -> /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/default-k8s-diff-port-152851/apiserver.crt
	I1124 14:20:19.512813  202335 certs.go:386] copying /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/default-k8s-diff-port-152851/apiserver.key.ec9a3231 -> /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/default-k8s-diff-port-152851/apiserver.key
	I1124 14:20:19.512893  202335 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/default-k8s-diff-port-152851/proxy-client.key
	I1124 14:20:19.512914  202335 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/default-k8s-diff-port-152851/proxy-client.crt with IP's: []
	I1124 14:20:19.745759  202335 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/default-k8s-diff-port-152851/proxy-client.crt ...
	I1124 14:20:19.745788  202335 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/default-k8s-diff-port-152851/proxy-client.crt: {Name:mk30be739c9f6f3ed1e613e63c6f7abce6dcd053 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:20:19.745969  202335 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/default-k8s-diff-port-152851/proxy-client.key ...
	I1124 14:20:19.745984  202335 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/default-k8s-diff-port-152851/proxy-client.key: {Name:mk6dff98d9974ded0b0a41ba88fda39d136b1e2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:20:19.746207  202335 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2805/.minikube/certs/4611.pem (1338 bytes)
	W1124 14:20:19.746255  202335 certs.go:480] ignoring /home/jenkins/minikube-integration/21932-2805/.minikube/certs/4611_empty.pem, impossibly tiny 0 bytes
	I1124 14:20:19.746271  202335 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca-key.pem (1679 bytes)
	I1124 14:20:19.746298  202335 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca.pem (1078 bytes)
	I1124 14:20:19.746328  202335 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2805/.minikube/certs/cert.pem (1123 bytes)
	I1124 14:20:19.746357  202335 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2805/.minikube/certs/key.pem (1675 bytes)
	I1124 14:20:19.746406  202335 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2805/.minikube/files/etc/ssl/certs/46112.pem (1708 bytes)
	I1124 14:20:19.747056  202335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 14:20:19.765341  202335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1124 14:20:19.784048  202335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 14:20:19.803039  202335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1124 14:20:19.822916  202335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/default-k8s-diff-port-152851/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1124 14:20:19.842479  202335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/default-k8s-diff-port-152851/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1124 14:20:19.861445  202335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/default-k8s-diff-port-152851/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 14:20:19.880399  202335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/default-k8s-diff-port-152851/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1124 14:20:19.903142  202335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/certs/4611.pem --> /usr/share/ca-certificates/4611.pem (1338 bytes)
	I1124 14:20:19.923840  202335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/files/etc/ssl/certs/46112.pem --> /usr/share/ca-certificates/46112.pem (1708 bytes)
	I1124 14:20:19.950883  202335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 14:20:19.972825  202335 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 14:20:19.987996  202335 ssh_runner.go:195] Run: openssl version
	I1124 14:20:19.994941  202335 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4611.pem && ln -fs /usr/share/ca-certificates/4611.pem /etc/ssl/certs/4611.pem"
	I1124 14:20:20.013822  202335 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4611.pem
	I1124 14:20:20.019615  202335 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 13:21 /usr/share/ca-certificates/4611.pem
	I1124 14:20:20.019692  202335 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4611.pem
	I1124 14:20:20.068389  202335 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4611.pem /etc/ssl/certs/51391683.0"
	I1124 14:20:20.077977  202335 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/46112.pem && ln -fs /usr/share/ca-certificates/46112.pem /etc/ssl/certs/46112.pem"
	I1124 14:20:20.088160  202335 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/46112.pem
	I1124 14:20:20.093054  202335 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 13:21 /usr/share/ca-certificates/46112.pem
	I1124 14:20:20.093131  202335 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/46112.pem
	I1124 14:20:20.140641  202335 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/46112.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 14:20:20.149628  202335 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 14:20:20.160237  202335 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 14:20:20.165466  202335 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 13:14 /usr/share/ca-certificates/minikubeCA.pem
	I1124 14:20:20.165550  202335 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 14:20:20.207861  202335 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
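Each trust-store step above follows the OpenSSL c_rehash convention: the certificate's subject hash names a <hash>.0 symlink in /etc/ssl/certs (b5213941.0 for minikubeCA.pem here). A sketch that shells out to openssl for the hash and creates the link; paths are placeholders and the openssl CLI is assumed to be installed:

    // cert_hash_link.go - a sketch of the hash-and-symlink step logged
    // above: compute the subject hash of a PEM cert, link "<hash>.0" to it.
    package main

    import (
    	"log"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    func main() {
    	cert := "minikubeCA.pem" // placeholder
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
    	if err != nil {
    		log.Fatal(err)
    	}
    	hash := strings.TrimSpace(string(out)) // e.g. b5213941
    	link := filepath.Join(".", hash+".0")
    	if _, err := os.Lstat(link); err == nil {
    		return // already linked
    	}
    	if err := os.Symlink(cert, link); err != nil {
    		log.Fatal(err)
    	}
    	log.Printf("linked %s -> %s", link, cert)
    }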
	I1124 14:20:20.216685  202335 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 14:20:20.221354  202335 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1124 14:20:20.221408  202335 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-152851 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-152851 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 14:20:20.221484  202335 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 14:20:20.221557  202335 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 14:20:20.260072  202335 cri.go:89] found id: ""
	I1124 14:20:20.260139  202335 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 14:20:20.273893  202335 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1124 14:20:20.287746  202335 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1124 14:20:20.287816  202335 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1124 14:20:20.296116  202335 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1124 14:20:20.296134  202335 kubeadm.go:158] found existing configuration files:
	
	I1124 14:20:20.296188  202335 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1124 14:20:20.305332  202335 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1124 14:20:20.305439  202335 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1124 14:20:20.313623  202335 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1124 14:20:20.321800  202335 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1124 14:20:20.321915  202335 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1124 14:20:20.329659  202335 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1124 14:20:20.337839  202335 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1124 14:20:20.337937  202335 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1124 14:20:20.345901  202335 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1124 14:20:20.354347  202335 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1124 14:20:20.354444  202335 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1124 14:20:20.362305  202335 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1124 14:20:20.413368  202335 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1124 14:20:20.413595  202335 kubeadm.go:319] [preflight] Running pre-flight checks
	I1124 14:20:20.440188  202335 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1124 14:20:20.440266  202335 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1124 14:20:20.440307  202335 kubeadm.go:319] OS: Linux
	I1124 14:20:20.440356  202335 kubeadm.go:319] CGROUPS_CPU: enabled
	I1124 14:20:20.440410  202335 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1124 14:20:20.440463  202335 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1124 14:20:20.440521  202335 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1124 14:20:20.440583  202335 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1124 14:20:20.440640  202335 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1124 14:20:20.440691  202335 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1124 14:20:20.440744  202335 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1124 14:20:20.440794  202335 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1124 14:20:20.514418  202335 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1124 14:20:20.514539  202335 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1124 14:20:20.514638  202335 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1124 14:20:20.523871  202335 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W1124 14:20:18.670755  198824 pod_ready.go:104] pod "coredns-66bc5c9577-6nztq" is not "Ready", error: <nil>
	W1124 14:20:21.170616  198824 pod_ready.go:104] pod "coredns-66bc5c9577-6nztq" is not "Ready", error: <nil>
	I1124 14:20:20.528169  202335 out.go:252]   - Generating certificates and keys ...
	I1124 14:20:20.528333  202335 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1124 14:20:20.528449  202335 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1124 14:20:21.284082  202335 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1124 14:20:21.744823  202335 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1124 14:20:22.641608  202335 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1124 14:20:22.861618  202335 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1124 14:20:23.289517  202335 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1124 14:20:23.289699  202335 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-152851 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1124 14:20:23.583104  202335 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1124 14:20:23.583463  202335 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-152851 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	W1124 14:20:23.666970  198824 pod_ready.go:104] pod "coredns-66bc5c9577-6nztq" is not "Ready", error: <nil>
	W1124 14:20:26.168532  198824 pod_ready.go:104] pod "coredns-66bc5c9577-6nztq" is not "Ready", error: <nil>
	I1124 14:20:24.508728  202335 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1124 14:20:25.604277  202335 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1124 14:20:26.093530  202335 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1124 14:20:26.093810  202335 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1124 14:20:27.214557  202335 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1124 14:20:27.370550  202335 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1124 14:20:27.491910  202335 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1124 14:20:28.164654  202335 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1124 14:20:28.378543  202335 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1124 14:20:28.379163  202335 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1124 14:20:28.382943  202335 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1124 14:20:28.386160  202335 out.go:252]   - Booting up control plane ...
	I1124 14:20:28.386265  202335 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1124 14:20:28.386349  202335 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1124 14:20:28.387194  202335 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1124 14:20:28.440098  202335 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1124 14:20:28.440206  202335 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1124 14:20:28.451707  202335 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1124 14:20:28.452072  202335 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1124 14:20:28.452296  202335 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1124 14:20:28.594949  202335 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1124 14:20:28.595070  202335 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	W1124 14:20:28.169115  198824 pod_ready.go:104] pod "coredns-66bc5c9577-6nztq" is not "Ready", error: <nil>
	I1124 14:20:28.669170  198824 pod_ready.go:94] pod "coredns-66bc5c9577-6nztq" is "Ready"
	I1124 14:20:28.669199  198824 pod_ready.go:86] duration metric: took 41.008302381s for pod "coredns-66bc5c9577-6nztq" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:20:28.672683  198824 pod_ready.go:83] waiting for pod "etcd-embed-certs-720293" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:20:28.679991  198824 pod_ready.go:94] pod "etcd-embed-certs-720293" is "Ready"
	I1124 14:20:28.680019  198824 pod_ready.go:86] duration metric: took 7.309484ms for pod "etcd-embed-certs-720293" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:20:28.682843  198824 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-720293" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:20:28.688281  198824 pod_ready.go:94] pod "kube-apiserver-embed-certs-720293" is "Ready"
	I1124 14:20:28.688305  198824 pod_ready.go:86] duration metric: took 5.437235ms for pod "kube-apiserver-embed-certs-720293" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:20:28.690924  198824 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-720293" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:20:28.865493  198824 pod_ready.go:94] pod "kube-controller-manager-embed-certs-720293" is "Ready"
	I1124 14:20:28.865525  198824 pod_ready.go:86] duration metric: took 174.576542ms for pod "kube-controller-manager-embed-certs-720293" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:20:29.065560  198824 pod_ready.go:83] waiting for pod "kube-proxy-pwpl4" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:20:29.465517  198824 pod_ready.go:94] pod "kube-proxy-pwpl4" is "Ready"
	I1124 14:20:29.465542  198824 pod_ready.go:86] duration metric: took 399.904007ms for pod "kube-proxy-pwpl4" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:20:29.665874  198824 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-720293" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:20:30.072055  198824 pod_ready.go:94] pod "kube-scheduler-embed-certs-720293" is "Ready"
	I1124 14:20:30.072084  198824 pod_ready.go:86] duration metric: took 406.18527ms for pod "kube-scheduler-embed-certs-720293" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:20:30.072097  198824 pod_ready.go:40] duration metric: took 42.415483801s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 14:20:30.161499  198824 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1124 14:20:30.164621  198824 out.go:179] * Done! kubectl is now configured to use "embed-certs-720293" cluster and "default" namespace by default
	I1124 14:20:30.595418  202335 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 2.000837956s
	I1124 14:20:30.599210  202335 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1124 14:20:30.599692  202335 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8444/livez
	I1124 14:20:30.600013  202335 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1124 14:20:30.600813  202335 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1124 14:20:33.733116  202335 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.131881343s
	I1124 14:20:35.607201  202335 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 5.00572933s
	I1124 14:20:37.102044  202335 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.501587155s
	I1124 14:20:37.122839  202335 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1124 14:20:37.139639  202335 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1124 14:20:37.155808  202335 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1124 14:20:37.156021  202335 kubeadm.go:319] [mark-control-plane] Marking the node default-k8s-diff-port-152851 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1124 14:20:37.169343  202335 kubeadm.go:319] [bootstrap-token] Using token: 1r0ssm.nhq1upa2er09iuov
	I1124 14:20:37.172446  202335 out.go:252]   - Configuring RBAC rules ...
	I1124 14:20:37.172599  202335 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1124 14:20:37.178899  202335 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1124 14:20:37.189849  202335 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1124 14:20:37.194492  202335 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1124 14:20:37.201072  202335 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1124 14:20:37.205644  202335 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1124 14:20:37.517471  202335 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1124 14:20:37.955683  202335 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1124 14:20:38.509015  202335 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1124 14:20:38.511106  202335 kubeadm.go:319] 
	I1124 14:20:38.511182  202335 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1124 14:20:38.511188  202335 kubeadm.go:319] 
	I1124 14:20:38.511265  202335 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1124 14:20:38.511276  202335 kubeadm.go:319] 
	I1124 14:20:38.511302  202335 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1124 14:20:38.513944  202335 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1124 14:20:38.514031  202335 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1124 14:20:38.514046  202335 kubeadm.go:319] 
	I1124 14:20:38.514102  202335 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1124 14:20:38.514106  202335 kubeadm.go:319] 
	I1124 14:20:38.514166  202335 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1124 14:20:38.514187  202335 kubeadm.go:319] 
	I1124 14:20:38.514263  202335 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1124 14:20:38.514347  202335 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1124 14:20:38.514422  202335 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1124 14:20:38.514426  202335 kubeadm.go:319] 
	I1124 14:20:38.514518  202335 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1124 14:20:38.514602  202335 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1124 14:20:38.514611  202335 kubeadm.go:319] 
	I1124 14:20:38.514719  202335 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8444 --token 1r0ssm.nhq1upa2er09iuov \
	I1124 14:20:38.514851  202335 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:37f0f49cec723293ddb4e564b6685275917c85627d2c55051ccb0f083d16274f \
	I1124 14:20:38.514880  202335 kubeadm.go:319] 	--control-plane 
	I1124 14:20:38.514887  202335 kubeadm.go:319] 
	I1124 14:20:38.514972  202335 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1124 14:20:38.514978  202335 kubeadm.go:319] 
	I1124 14:20:38.515056  202335 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8444 --token 1r0ssm.nhq1upa2er09iuov \
	I1124 14:20:38.515160  202335 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:37f0f49cec723293ddb4e564b6685275917c85627d2c55051ccb0f083d16274f 
	I1124 14:20:38.520929  202335 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1124 14:20:38.521160  202335 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1124 14:20:38.521271  202335 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1124 14:20:38.521292  202335 cni.go:84] Creating CNI manager for ""
	I1124 14:20:38.521304  202335 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 14:20:38.526552  202335 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1124 14:20:38.529412  202335 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1124 14:20:38.533787  202335 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1124 14:20:38.533808  202335 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1124 14:20:38.551778  202335 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1124 14:20:38.857833  202335 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1124 14:20:38.858000  202335 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:20:38.858070  202335 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-152851 minikube.k8s.io/updated_at=2025_11_24T14_20_38_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=b5d1c9f4e75f4e638a533695fd62619949cefcab minikube.k8s.io/name=default-k8s-diff-port-152851 minikube.k8s.io/primary=true
	I1124 14:20:39.034165  202335 ops.go:34] apiserver oom_adj: -16
	I1124 14:20:39.034276  202335 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:20:39.534698  202335 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:20:40.034463  202335 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:20:40.534556  202335 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:20:41.035427  202335 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:20:41.534422  202335 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:20:42.034686  202335 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:20:42.534391  202335 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:20:43.034342  202335 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:20:43.150488  202335 kubeadm.go:1114] duration metric: took 4.292549445s to wait for elevateKubeSystemPrivileges
	I1124 14:20:43.150516  202335 kubeadm.go:403] duration metric: took 22.929111223s to StartCluster
	I1124 14:20:43.150533  202335 settings.go:142] acquiring lock: {Name:mk89c1ba43c874315f683e1eb3a8f5ff3817a931 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:20:43.150598  202335 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21932-2805/kubeconfig
	I1124 14:20:43.152144  202335 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2805/kubeconfig: {Name:mk95d10d27091d631e85a5a3c35d5e4e38630871 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:20:43.152393  202335 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 14:20:43.152633  202335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1124 14:20:43.152951  202335 config.go:182] Loaded profile config "default-k8s-diff-port-152851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 14:20:43.152989  202335 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 14:20:43.153047  202335 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-152851"
	I1124 14:20:43.153064  202335 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-152851"
	I1124 14:20:43.153086  202335 host.go:66] Checking if "default-k8s-diff-port-152851" exists ...
	I1124 14:20:43.153593  202335 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-152851 --format={{.State.Status}}
	I1124 14:20:43.153859  202335 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-152851"
	I1124 14:20:43.153877  202335 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-152851"
	I1124 14:20:43.154145  202335 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-152851 --format={{.State.Status}}
	I1124 14:20:43.157463  202335 out.go:179] * Verifying Kubernetes components...
	I1124 14:20:43.161249  202335 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 14:20:43.194174  202335 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 14:20:43.198436  202335 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 14:20:43.198461  202335 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 14:20:43.198524  202335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-152851
	I1124 14:20:43.200325  202335 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-152851"
	I1124 14:20:43.200367  202335 host.go:66] Checking if "default-k8s-diff-port-152851" exists ...
	I1124 14:20:43.200836  202335 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-152851 --format={{.State.Status}}
	I1124 14:20:43.241498  202335 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 14:20:43.241521  202335 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 14:20:43.241582  202335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-152851
	I1124 14:20:43.245598  202335 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/default-k8s-diff-port-152851/id_rsa Username:docker}
	I1124 14:20:43.279601  202335 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/default-k8s-diff-port-152851/id_rsa Username:docker}
	I1124 14:20:43.717447  202335 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 14:20:43.733813  202335 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 14:20:43.830734  202335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1124 14:20:43.830873  202335 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 14:20:45.128194  202335 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.394339727s)
	I1124 14:20:45.128424  202335 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.297530715s)
	I1124 14:20:45.129665  202335 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-152851" to be "Ready" ...
	I1124 14:20:45.129957  202335 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.299180505s)
	I1124 14:20:45.129975  202335 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1124 14:20:45.133649  202335 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
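
The half-second `kubectl get sa default` retries above (14:20:39.034 through 14:20:43.034) are minikube waiting for the cluster to mint the `default` service account before it grants elevated privileges to `kube-system:default`; the loop resolved in about 4.29s per the `elevateKubeSystemPrivileges` duration metric. A minimal sketch of the same readiness check, assuming `kubectl` on PATH and a kubeconfig already pointing at the new cluster (the interval and try cap here are illustrative, not minikube's actual values):

    # Poll until the "default" ServiceAccount exists, as the log's retry loop does.
    for i in $(seq 1 30); do
      if kubectl -n default get serviceaccount default >/dev/null 2>&1; then
        echo "default service account present after ${i} tries"
        break
      fi
      sleep 0.5
    done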
	
	
	==> CRI-O <==
	Nov 24 14:20:26 embed-certs-720293 crio[655]: time="2025-11-24T14:20:26.818988801Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 24 14:20:26 embed-certs-720293 crio[655]: time="2025-11-24T14:20:26.831652631Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 24 14:20:26 embed-certs-720293 crio[655]: time="2025-11-24T14:20:26.831822552Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 24 14:20:26 embed-certs-720293 crio[655]: time="2025-11-24T14:20:26.831931493Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 24 14:20:26 embed-certs-720293 crio[655]: time="2025-11-24T14:20:26.837507535Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 24 14:20:26 embed-certs-720293 crio[655]: time="2025-11-24T14:20:26.837676784Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 24 14:20:26 embed-certs-720293 crio[655]: time="2025-11-24T14:20:26.837754783Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 24 14:20:26 embed-certs-720293 crio[655]: time="2025-11-24T14:20:26.841301774Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 24 14:20:26 embed-certs-720293 crio[655]: time="2025-11-24T14:20:26.841489452Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 24 14:20:26 embed-certs-720293 crio[655]: time="2025-11-24T14:20:26.841605498Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 24 14:20:26 embed-certs-720293 crio[655]: time="2025-11-24T14:20:26.844885122Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 24 14:20:26 embed-certs-720293 crio[655]: time="2025-11-24T14:20:26.845033645Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 24 14:20:28 embed-certs-720293 crio[655]: time="2025-11-24T14:20:28.066831244Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=6913cd53-04bb-4297-8f73-6b24c5157bd3 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 14:20:28 embed-certs-720293 crio[655]: time="2025-11-24T14:20:28.068731612Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=9c499223-7794-4e7b-a3ef-d17d8dea661b name=/runtime.v1.ImageService/ImageStatus
	Nov 24 14:20:28 embed-certs-720293 crio[655]: time="2025-11-24T14:20:28.070019347Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2bdsr/dashboard-metrics-scraper" id=3114f366-33be-4ee8-a6ff-d65da75abba6 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 14:20:28 embed-certs-720293 crio[655]: time="2025-11-24T14:20:28.070157908Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 14:20:28 embed-certs-720293 crio[655]: time="2025-11-24T14:20:28.087282256Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 14:20:28 embed-certs-720293 crio[655]: time="2025-11-24T14:20:28.089341084Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 14:20:28 embed-certs-720293 crio[655]: time="2025-11-24T14:20:28.107189419Z" level=info msg="Created container f4b8663a77e015998912426d36c1f9b5b969ea5de94d97c635621222587cc7c2: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2bdsr/dashboard-metrics-scraper" id=3114f366-33be-4ee8-a6ff-d65da75abba6 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 14:20:28 embed-certs-720293 crio[655]: time="2025-11-24T14:20:28.112781264Z" level=info msg="Starting container: f4b8663a77e015998912426d36c1f9b5b969ea5de94d97c635621222587cc7c2" id=11b97435-65b8-418c-8c48-d2da8a8a9be6 name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 14:20:28 embed-certs-720293 crio[655]: time="2025-11-24T14:20:28.117439741Z" level=info msg="Started container" PID=1688 containerID=f4b8663a77e015998912426d36c1f9b5b969ea5de94d97c635621222587cc7c2 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2bdsr/dashboard-metrics-scraper id=11b97435-65b8-418c-8c48-d2da8a8a9be6 name=/runtime.v1.RuntimeService/StartContainer sandboxID=4b58f0630032bd62bbb8380ce0879fefa25fbe4c4be592c4771b858d88e057fb
	Nov 24 14:20:28 embed-certs-720293 conmon[1686]: conmon f4b8663a77e015998912 <ninfo>: container 1688 exited with status 1
	Nov 24 14:20:28 embed-certs-720293 crio[655]: time="2025-11-24T14:20:28.420493421Z" level=info msg="Removing container: 4381ccbfaceff81d563e9d804d5e1a38d69d47f328787b1031d0cc609bc89bf5" id=1311d61f-3c12-43c9-a4b3-7d46b89a3ee2 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 24 14:20:28 embed-certs-720293 crio[655]: time="2025-11-24T14:20:28.443627145Z" level=info msg="Error loading conmon cgroup of container 4381ccbfaceff81d563e9d804d5e1a38d69d47f328787b1031d0cc609bc89bf5: cgroup deleted" id=1311d61f-3c12-43c9-a4b3-7d46b89a3ee2 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 24 14:20:28 embed-certs-720293 crio[655]: time="2025-11-24T14:20:28.450960932Z" level=info msg="Removed container 4381ccbfaceff81d563e9d804d5e1a38d69d47f328787b1031d0cc609bc89bf5: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2bdsr/dashboard-metrics-scraper" id=1311d61f-3c12-43c9-a4b3-7d46b89a3ee2 name=/runtime.v1.RuntimeService/RemoveContainer
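
The tail of this CRI-O log is a crash loop: container f4b8663a77e0... is created and started at 14:20:28, conmon reports it exiting with status 1 within the same second, and the previous attempt (4381ccbfacef...) is garbage-collected, matching the "Exited ... ATTEMPT 2" row in the status table below. One way to triage such a loop directly on the node, sketched with standard crictl subcommands (container ID abbreviated as in the table; nothing here is minikube-specific):

    sudo crictl ps -a --name dashboard-metrics-scraper   # list all attempts, incl. exited ones
    sudo crictl logs f4b8663a77e01                       # stdout/stderr of the failed attempt
    sudo crictl inspect f4b8663a77e01                    # exit code, OOM-killed flag, mounts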
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	f4b8663a77e01       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           17 seconds ago       Exited              dashboard-metrics-scraper   2                   4b58f0630032b       dashboard-metrics-scraper-6ffb444bf9-2bdsr   kubernetes-dashboard
	467261f4d2571       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           28 seconds ago       Running             storage-provisioner         2                   50b77adcdb377       storage-provisioner                          kube-system
	253bcd0f532f6       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   45 seconds ago       Running             kubernetes-dashboard        0                   656761d966e4f       kubernetes-dashboard-855c9754f9-7rfrv        kubernetes-dashboard
	42aec503a0da6       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           59 seconds ago       Running             coredns                     1                   c3c57dc5ba36e       coredns-66bc5c9577-6nztq                     kube-system
	783bd123b9780       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           59 seconds ago       Running             busybox                     1                   70b1ac05fea3f       busybox                                      default
	00a43be63f0ee       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           59 seconds ago       Running             kube-proxy                  1                   af799f14d2ed4       kube-proxy-pwpl4                             kube-system
	39826316da886       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           59 seconds ago       Exited              storage-provisioner         1                   50b77adcdb377       storage-provisioner                          kube-system
	cd3870847e89f       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           59 seconds ago       Running             kindnet-cni                 1                   834410470ac72       kindnet-ft88w                                kube-system
	7a20914603732       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   6fa44c90844da       kube-controller-manager-embed-certs-720293   kube-system
	7634741324dd1       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   24b07d8445ab9       kube-scheduler-embed-certs-720293            kube-system
	43d901c75e4d3       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   760bd3b69d313       etcd-embed-certs-720293                      kube-system
	da3f0798a706d       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   2f2b000532175       kube-apiserver-embed-certs-720293            kube-system
	
	
	==> coredns [42aec503a0da6a36b971d1c7b96c464efde9b44c146277076e103b249a49c5de] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:50117 - 31578 "HINFO IN 3175689223186040760.3352707592823412203. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.034811987s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
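
The three list failures above are CoreDNS's informers timing out against the in-cluster API VIP (10.96.0.1:443) while the restarted node's network path was still converging; the kindnet log further down shows the same `i/o timeout` errors clearing by 14:20:18, after which the pod went Ready. To probe that path by hand, one option is a throwaway pod (the `curlimages/curl` image is an assumption; any image with curl works, and even a 401/403 response proves L3/L4 reachability):

    # Probe the kubernetes Service VIP from inside the cluster; -k skips TLS
    # verification since only reachability is being tested here.
    kubectl run api-probe --rm -i --restart=Never --image=curlimages/curl -- \
      curl -sk -o /dev/null -w '%{http_code}\n' https://10.96.0.1:443/healthz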
	
	
	==> describe nodes <==
	Name:               embed-certs-720293
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-720293
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b5d1c9f4e75f4e638a533695fd62619949cefcab
	                    minikube.k8s.io/name=embed-certs-720293
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T14_18_18_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 14:18:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-720293
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 14:20:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 14:20:16 +0000   Mon, 24 Nov 2025 14:18:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 14:20:16 +0000   Mon, 24 Nov 2025 14:18:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 14:20:16 +0000   Mon, 24 Nov 2025 14:18:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Nov 2025 14:20:16 +0000   Mon, 24 Nov 2025 14:19:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    embed-certs-720293
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 7283ea1857f18f20a875c29069214c9d
	  System UUID:                f982cc7c-133c-414c-b480-dd4b30e870c6
	  Boot ID:                    1b5f797b-5607-4a65-8de2-379783b7e272
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         99s
	  kube-system                 coredns-66bc5c9577-6nztq                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m24s
	  kube-system                 etcd-embed-certs-720293                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m29s
	  kube-system                 kindnet-ft88w                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m25s
	  kube-system                 kube-apiserver-embed-certs-720293             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m29s
	  kube-system                 kube-controller-manager-embed-certs-720293    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m29s
	  kube-system                 kube-proxy-pwpl4                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m25s
	  kube-system                 kube-scheduler-embed-certs-720293             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m29s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m23s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-2bdsr    0 (0%)        0 (0%)      0 (0%)           0 (0%)         57s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-7rfrv         0 (0%)        0 (0%)      0 (0%)           0 (0%)         57s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m22s                  kube-proxy       
	  Normal   Starting                 58s                    kube-proxy       
	  Normal   Starting                 2m38s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m38s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m38s (x8 over 2m38s)  kubelet          Node embed-certs-720293 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m38s (x8 over 2m38s)  kubelet          Node embed-certs-720293 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m38s (x8 over 2m38s)  kubelet          Node embed-certs-720293 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    2m29s                  kubelet          Node embed-certs-720293 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 2m29s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m29s                  kubelet          Node embed-certs-720293 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     2m29s                  kubelet          Node embed-certs-720293 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m29s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m25s                  node-controller  Node embed-certs-720293 event: Registered Node embed-certs-720293 in Controller
	  Normal   NodeReady                103s                   kubelet          Node embed-certs-720293 status is now: NodeReady
	  Normal   Starting                 67s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 67s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  66s (x8 over 66s)      kubelet          Node embed-certs-720293 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    66s (x8 over 66s)      kubelet          Node embed-certs-720293 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     66s (x8 over 66s)      kubelet          Node embed-certs-720293 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           58s                    node-controller  Node embed-certs-720293 event: Registered Node embed-certs-720293 in Controller
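
Two readings of this `describe` output: the percentages under "Allocated resources" are requests/limits divided by the node's allocatable capacity (850m CPU of 2000m allocatable, integer-truncated to 42%; 220Mi against 8022300Ki of memory is roughly 2%), and the three "Starting kubelet." events (2m38s, 2m29s, 67s ago) are consistent with the cluster's initial bring-up followed by the test's later stop/start cycle. The CPU figure, recomputed:

    echo $(( 850 * 100 / 2000 ))   # -> 42, matching the "cpu 850m (42%)" row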
	
	
	==> dmesg <==
	[Nov24 13:56] overlayfs: idmapped layers are currently not supported
	[Nov24 13:57] overlayfs: idmapped layers are currently not supported
	[Nov24 13:58] overlayfs: idmapped layers are currently not supported
	[  +2.963383] overlayfs: idmapped layers are currently not supported
	[ +47.364934] overlayfs: idmapped layers are currently not supported
	[Nov24 13:59] overlayfs: idmapped layers are currently not supported
	[Nov24 14:00] overlayfs: idmapped layers are currently not supported
	[ +26.972375] overlayfs: idmapped layers are currently not supported
	[Nov24 14:02] overlayfs: idmapped layers are currently not supported
	[Nov24 14:03] overlayfs: idmapped layers are currently not supported
	[Nov24 14:05] overlayfs: idmapped layers are currently not supported
	[Nov24 14:07] overlayfs: idmapped layers are currently not supported
	[ +22.741489] overlayfs: idmapped layers are currently not supported
	[Nov24 14:11] overlayfs: idmapped layers are currently not supported
	[Nov24 14:13] overlayfs: idmapped layers are currently not supported
	[ +29.661409] overlayfs: idmapped layers are currently not supported
	[ +14.398898] overlayfs: idmapped layers are currently not supported
	[Nov24 14:14] overlayfs: idmapped layers are currently not supported
	[ +36.148198] overlayfs: idmapped layers are currently not supported
	[Nov24 14:16] overlayfs: idmapped layers are currently not supported
	[Nov24 14:17] overlayfs: idmapped layers are currently not supported
	[Nov24 14:18] overlayfs: idmapped layers are currently not supported
	[ +49.916713] overlayfs: idmapped layers are currently not supported
	[Nov24 14:19] overlayfs: idmapped layers are currently not supported
	[Nov24 14:20] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [43d901c75e4d3ea7cfdd826b2f38e870e2be39de21570400fd187f7a2239344b] <==
	{"level":"warn","ts":"2025-11-24T14:19:43.805983Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43512","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:19:43.883984Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43526","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:19:43.894997Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43546","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:19:43.933223Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43564","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:19:43.965539Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43582","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:19:43.976987Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43596","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:19:44.006753Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43618","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:19:44.034569Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43646","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:19:44.066906Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43658","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:19:44.104286Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43676","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:19:44.132263Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43694","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:19:44.189674Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43712","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:19:44.224792Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43740","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:19:44.245726Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43762","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:19:44.271709Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43782","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:19:44.289448Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43804","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:19:44.307074Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43824","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:19:44.336224Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43828","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:19:44.353720Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43852","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:19:44.371558Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43862","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:19:44.389413Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43884","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:19:44.424364Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43896","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:19:44.442164Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43918","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:19:44.461636Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43940","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:19:44.526551Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43960","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 14:20:46 up  2:03,  0 user,  load average: 3.44, 3.01, 2.55
	Linux embed-certs-720293 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [cd3870847e89f7ad9748689f39400529da3e34ea80bd2e9c5d50b94014870174] <==
	I1124 14:19:46.620918       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1124 14:19:46.621408       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1124 14:19:46.621572       1 main.go:148] setting mtu 1500 for CNI 
	I1124 14:19:46.621613       1 main.go:178] kindnetd IP family: "ipv4"
	I1124 14:19:46.621656       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-24T14:19:46Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1124 14:19:46.830970       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1124 14:19:46.831002       1 controller.go:381] "Waiting for informer caches to sync"
	I1124 14:19:46.831026       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1124 14:19:46.831179       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1124 14:20:16.816224       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1124 14:20:16.820870       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1124 14:20:16.821087       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1124 14:20:16.821239       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1124 14:20:18.231650       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1124 14:20:18.231771       1 metrics.go:72] Registering metrics
	I1124 14:20:18.231874       1 controller.go:711] "Syncing nftables rules"
	I1124 14:20:26.818483       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1124 14:20:26.818676       1 main.go:301] handling current node
	I1124 14:20:36.815227       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1124 14:20:36.815442       1 main.go:301] handling current node
	
	
	==> kube-apiserver [da3f0798a706df28b161fc15c24ff964503411fba4af93d09ab0786003dc32ea] <==
	I1124 14:19:45.540801       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1124 14:19:45.545449       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1124 14:19:45.577324       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1124 14:19:45.577358       1 policy_source.go:240] refreshing policies
	I1124 14:19:45.579015       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1124 14:19:45.579070       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1124 14:19:45.579089       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1124 14:19:45.626736       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1124 14:19:45.626769       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1124 14:19:45.626962       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1124 14:19:45.631682       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 14:19:45.655884       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1124 14:19:45.675634       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	E1124 14:19:45.769252       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1124 14:19:46.069231       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1124 14:19:46.337126       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1124 14:19:46.510986       1 controller.go:667] quota admission added evaluator for: namespaces
	I1124 14:19:46.646107       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1124 14:19:46.772580       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1124 14:19:46.804272       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1124 14:19:46.945122       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.104.116.153"}
	I1124 14:19:46.983751       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.101.207.16"}
	I1124 14:19:49.099549       1 controller.go:667] quota admission added evaluator for: endpoints
	I1124 14:19:49.398040       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1124 14:19:49.451389       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [7a20914603732648c5d9ff34200e808b2002ae00dc4000fe37adb370011a3888] <==
	I1124 14:19:48.963166       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-720293"
	I1124 14:19:48.963212       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1124 14:19:48.963727       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1124 14:19:48.966425       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1124 14:19:48.969588       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1124 14:19:48.969718       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1124 14:19:48.970047       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 14:19:48.970118       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1124 14:19:48.971079       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1124 14:19:48.973396       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1124 14:19:48.973810       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1124 14:19:48.976014       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1124 14:19:48.976332       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1124 14:19:48.978577       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1124 14:19:48.980873       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1124 14:19:48.983801       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1124 14:19:48.990900       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1124 14:19:48.991009       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1124 14:19:48.992115       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1124 14:19:48.992186       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1124 14:19:48.992324       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 14:19:48.992340       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1124 14:19:48.992347       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1124 14:19:48.992469       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1124 14:19:49.021303       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [00a43be63f0ee0c7c3caa8dd1d91a6db23be6515f0e5612d27abb6cdc903cf4b] <==
	I1124 14:19:46.849346       1 server_linux.go:53] "Using iptables proxy"
	I1124 14:19:47.086885       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1124 14:19:47.187594       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1124 14:19:47.187632       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1124 14:19:47.187704       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 14:19:47.220988       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 14:19:47.221045       1 server_linux.go:132] "Using iptables Proxier"
	I1124 14:19:47.227528       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 14:19:47.227948       1 server.go:527] "Version info" version="v1.34.1"
	I1124 14:19:47.227975       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 14:19:47.230528       1 config.go:200] "Starting service config controller"
	I1124 14:19:47.230551       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 14:19:47.230570       1 config.go:106] "Starting endpoint slice config controller"
	I1124 14:19:47.230575       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 14:19:47.230588       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 14:19:47.230593       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1124 14:19:47.234142       1 config.go:309] "Starting node config controller"
	I1124 14:19:47.234189       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 14:19:47.234202       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1124 14:19:47.330662       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1124 14:19:47.330675       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1124 14:19:47.330736       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [7634741324dd1d91cc93df52ab62f4e54882e2826f3185dee5ff5c38bdffd3cf] <==
	I1124 14:19:43.081282       1 serving.go:386] Generated self-signed cert in-memory
	W1124 14:19:45.546451       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1124 14:19:45.546492       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1124 14:19:45.546502       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1124 14:19:45.546510       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1124 14:19:45.606719       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1124 14:19:45.621182       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 14:19:45.633327       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1124 14:19:45.634020       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1124 14:19:45.634079       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 14:19:45.647811       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 14:19:45.748503       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
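
The requestheader_controller warning at 14:19:45.546451 spells out its own remedy; filled in for the identity named in the forbidden error (the rolebinding name below is hypothetical, and the scheduler authenticates as a user rather than a service account), it would look like:

    kubectl -n kube-system create rolebinding scheduler-auth-reader \
      --role=extension-apiserver-authentication-reader \
      --user=system:kube-scheduler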
	
	
	==> kubelet <==
	Nov 24 14:19:46 embed-certs-720293 kubelet[783]: W1124 14:19:46.466418     783 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/70d00db6e7822d3b00ce565e804c7ecaca79c8fb11b2d568f3f30fb3df09a34b/crio-70b1ac05fea3f99efdbc12ddc3e17d202d6a6100ad046026db1d32036ca60995 WatchSource:0}: Error finding container 70b1ac05fea3f99efdbc12ddc3e17d202d6a6100ad046026db1d32036ca60995: Status 404 returned error can't find the container with id 70b1ac05fea3f99efdbc12ddc3e17d202d6a6100ad046026db1d32036ca60995
	Nov 24 14:19:46 embed-certs-720293 kubelet[783]: W1124 14:19:46.549501     783 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/70d00db6e7822d3b00ce565e804c7ecaca79c8fb11b2d568f3f30fb3df09a34b/crio-c3c57dc5ba36e850403c489378062a88fb81de4f608bc351d77f316149e3d441 WatchSource:0}: Error finding container c3c57dc5ba36e850403c489378062a88fb81de4f608bc351d77f316149e3d441: Status 404 returned error can't find the container with id c3c57dc5ba36e850403c489378062a88fb81de4f608bc351d77f316149e3d441
	Nov 24 14:19:49 embed-certs-720293 kubelet[783]: I1124 14:19:49.676538     783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99bwc\" (UniqueName: \"kubernetes.io/projected/85c34e57-0c32-4608-9857-e57d504ed2d4-kube-api-access-99bwc\") pod \"dashboard-metrics-scraper-6ffb444bf9-2bdsr\" (UID: \"85c34e57-0c32-4608-9857-e57d504ed2d4\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2bdsr"
	Nov 24 14:19:49 embed-certs-720293 kubelet[783]: I1124 14:19:49.676602     783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/85c34e57-0c32-4608-9857-e57d504ed2d4-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-2bdsr\" (UID: \"85c34e57-0c32-4608-9857-e57d504ed2d4\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2bdsr"
	Nov 24 14:19:49 embed-certs-720293 kubelet[783]: I1124 14:19:49.676628     783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ts8qc\" (UniqueName: \"kubernetes.io/projected/54479f7d-df5f-4bdb-9bf0-fffe91f3f263-kube-api-access-ts8qc\") pod \"kubernetes-dashboard-855c9754f9-7rfrv\" (UID: \"54479f7d-df5f-4bdb-9bf0-fffe91f3f263\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-7rfrv"
	Nov 24 14:19:49 embed-certs-720293 kubelet[783]: I1124 14:19:49.676651     783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/54479f7d-df5f-4bdb-9bf0-fffe91f3f263-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-7rfrv\" (UID: \"54479f7d-df5f-4bdb-9bf0-fffe91f3f263\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-7rfrv"
	Nov 24 14:19:49 embed-certs-720293 kubelet[783]: W1124 14:19:49.929156     783 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/70d00db6e7822d3b00ce565e804c7ecaca79c8fb11b2d568f3f30fb3df09a34b/crio-656761d966e4fe27ff91bbf258cdec3239ee8b3f4546005eb1862ea6c00cea8a WatchSource:0}: Error finding container 656761d966e4fe27ff91bbf258cdec3239ee8b3f4546005eb1862ea6c00cea8a: Status 404 returned error can't find the container with id 656761d966e4fe27ff91bbf258cdec3239ee8b3f4546005eb1862ea6c00cea8a
	Nov 24 14:19:49 embed-certs-720293 kubelet[783]: W1124 14:19:49.951473     783 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/70d00db6e7822d3b00ce565e804c7ecaca79c8fb11b2d568f3f30fb3df09a34b/crio-4b58f0630032bd62bbb8380ce0879fefa25fbe4c4be592c4771b858d88e057fb WatchSource:0}: Error finding container 4b58f0630032bd62bbb8380ce0879fefa25fbe4c4be592c4771b858d88e057fb: Status 404 returned error can't find the container with id 4b58f0630032bd62bbb8380ce0879fefa25fbe4c4be592c4771b858d88e057fb
	Nov 24 14:20:00 embed-certs-720293 kubelet[783]: I1124 14:20:00.434327     783 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-7rfrv" podStartSLOduration=1.474627137 podStartE2EDuration="11.434305677s" podCreationTimestamp="2025-11-24 14:19:49 +0000 UTC" firstStartedPulling="2025-11-24 14:19:49.934126072 +0000 UTC m=+10.103278256" lastFinishedPulling="2025-11-24 14:19:59.89380462 +0000 UTC m=+20.062956796" observedRunningTime="2025-11-24 14:20:00.43321614 +0000 UTC m=+20.602368315" watchObservedRunningTime="2025-11-24 14:20:00.434305677 +0000 UTC m=+20.603457968"
	Nov 24 14:20:07 embed-certs-720293 kubelet[783]: I1124 14:20:07.348455     783 scope.go:117] "RemoveContainer" containerID="17400581bf2ecd7f4c1669300644ca9f310f6dac63342b606cc38a4388a5ab6f"
	Nov 24 14:20:08 embed-certs-720293 kubelet[783]: I1124 14:20:08.352750     783 scope.go:117] "RemoveContainer" containerID="4381ccbfaceff81d563e9d804d5e1a38d69d47f328787b1031d0cc609bc89bf5"
	Nov 24 14:20:08 embed-certs-720293 kubelet[783]: E1124 14:20:08.352911     783 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-2bdsr_kubernetes-dashboard(85c34e57-0c32-4608-9857-e57d504ed2d4)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2bdsr" podUID="85c34e57-0c32-4608-9857-e57d504ed2d4"
	Nov 24 14:20:08 embed-certs-720293 kubelet[783]: I1124 14:20:08.353092     783 scope.go:117] "RemoveContainer" containerID="17400581bf2ecd7f4c1669300644ca9f310f6dac63342b606cc38a4388a5ab6f"
	Nov 24 14:20:15 embed-certs-720293 kubelet[783]: I1124 14:20:15.858794     783 scope.go:117] "RemoveContainer" containerID="4381ccbfaceff81d563e9d804d5e1a38d69d47f328787b1031d0cc609bc89bf5"
	Nov 24 14:20:15 embed-certs-720293 kubelet[783]: E1124 14:20:15.858973     783 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-2bdsr_kubernetes-dashboard(85c34e57-0c32-4608-9857-e57d504ed2d4)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2bdsr" podUID="85c34e57-0c32-4608-9857-e57d504ed2d4"
	Nov 24 14:20:17 embed-certs-720293 kubelet[783]: I1124 14:20:17.378004     783 scope.go:117] "RemoveContainer" containerID="39826316da8861c6c390b371b0dfee6fb9b2d796fc941ea2368f1621b3599610"
	Nov 24 14:20:28 embed-certs-720293 kubelet[783]: I1124 14:20:28.065444     783 scope.go:117] "RemoveContainer" containerID="4381ccbfaceff81d563e9d804d5e1a38d69d47f328787b1031d0cc609bc89bf5"
	Nov 24 14:20:28 embed-certs-720293 kubelet[783]: I1124 14:20:28.410206     783 scope.go:117] "RemoveContainer" containerID="4381ccbfaceff81d563e9d804d5e1a38d69d47f328787b1031d0cc609bc89bf5"
	Nov 24 14:20:28 embed-certs-720293 kubelet[783]: I1124 14:20:28.411070     783 scope.go:117] "RemoveContainer" containerID="f4b8663a77e015998912426d36c1f9b5b969ea5de94d97c635621222587cc7c2"
	Nov 24 14:20:28 embed-certs-720293 kubelet[783]: E1124 14:20:28.411558     783 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-2bdsr_kubernetes-dashboard(85c34e57-0c32-4608-9857-e57d504ed2d4)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2bdsr" podUID="85c34e57-0c32-4608-9857-e57d504ed2d4"
	Nov 24 14:20:35 embed-certs-720293 kubelet[783]: I1124 14:20:35.859236     783 scope.go:117] "RemoveContainer" containerID="f4b8663a77e015998912426d36c1f9b5b969ea5de94d97c635621222587cc7c2"
	Nov 24 14:20:35 embed-certs-720293 kubelet[783]: E1124 14:20:35.859941     783 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-2bdsr_kubernetes-dashboard(85c34e57-0c32-4608-9857-e57d504ed2d4)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2bdsr" podUID="85c34e57-0c32-4608-9857-e57d504ed2d4"
	Nov 24 14:20:42 embed-certs-720293 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 24 14:20:42 embed-certs-720293 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 24 14:20:42 embed-certs-720293 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
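
The two pod_workers errors above show the kubelet's CrashLoopBackOff delay doubling (back-off 10s, then 20s; it caps at 5m). To see why dashboard-metrics-scraper keeps exiting, the logs of the previously terminated container instance are usually the first stop (pod name taken from the log above):

    kubectl --context embed-certs-720293 -n kubernetes-dashboard \
      logs pod/dashboard-metrics-scraper-6ffb444bf9-2bdsr --previous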
	
	
	==> kubernetes-dashboard [253bcd0f532f66e0e5b2fc4a4c88d5958c2a131d6a5a69048a5a2749195b6547] <==
	2025/11/24 14:20:00 Using namespace: kubernetes-dashboard
	2025/11/24 14:20:00 Using in-cluster config to connect to apiserver
	2025/11/24 14:20:00 Using secret token for csrf signing
	2025/11/24 14:20:00 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/24 14:20:00 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/24 14:20:00 Successful initial request to the apiserver, version: v1.34.1
	2025/11/24 14:20:00 Generating JWE encryption key
	2025/11/24 14:20:00 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/24 14:20:00 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/24 14:20:01 Initializing JWE encryption key from synchronized object
	2025/11/24 14:20:01 Creating in-cluster Sidecar client
	2025/11/24 14:20:01 Serving insecurely on HTTP port: 9090
	2025/11/24 14:20:01 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/24 14:20:31 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/24 14:20:00 Starting overwatch
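
The repeated metric client health-check failure means the dashboard cannot reach the dashboard-metrics-scraper Service yet, which is consistent with the scraper pod sitting in CrashLoopBackOff in the kubelet log above. A quick check of the Service named in the error:

    kubectl --context embed-certs-720293 -n kubernetes-dashboard \
      get svc dashboard-metrics-scraper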
	
	
	==> storage-provisioner [39826316da8861c6c390b371b0dfee6fb9b2d796fc941ea2368f1621b3599610] <==
	I1124 14:19:46.702703       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1124 14:20:16.704797       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
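
The fatal i/o timeout is the provisioner failing to reach the apiserver through the kubernetes Service ClusterIP (10.96.0.1), which fits a cluster still settling after a restart. A hedged in-cluster probe of that address, assuming curlimages/curl is pullable in this environment:

    kubectl --context embed-certs-720293 run api-probe --rm -i --restart=Never \
      --image=curlimages/curl -- curl -sk --max-time 5 https://10.96.0.1/version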
	
	
	==> storage-provisioner [467261f4d2571726e5a5f78ed70bec6f37018a976a06002a0150522a12c9e447] <==
	I1124 14:20:17.462120       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1124 14:20:17.462262       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1124 14:20:17.466019       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:20:20.921324       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:20:25.181958       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:20:28.782095       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:20:31.836615       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:20:34.858676       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:20:34.868412       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1124 14:20:34.868584       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1124 14:20:34.868777       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-720293_6c184d3c-979d-44bc-972a-cc3066323b01!
	I1124 14:20:34.879272       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b678df36-62b8-4640-a341-449d1c1095fb", APIVersion:"v1", ResourceVersion:"685", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-720293_6c184d3c-979d-44bc-972a-cc3066323b01 became leader
	W1124 14:20:34.895289       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:20:34.909784       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1124 14:20:34.974377       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-720293_6c184d3c-979d-44bc-972a-cc3066323b01!
	W1124 14:20:36.912620       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:20:36.919080       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:20:38.926183       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:20:38.933155       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:20:40.936839       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:20:40.948306       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:20:42.951231       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:20:42.955841       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:20:44.959883       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:20:44.974582       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
-- /stdout --
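The second storage-provisioner instance repeats the "v1 Endpoints is deprecated" warning because its leader election still locks on an Endpoints object (kube-system/k8s.io-minikube-hostpath, visible in the LeaderElection event above) rather than a coordination.k8s.io Lease. The lock object can be inspected directly:

    kubectl --context embed-certs-720293 -n kube-system \
      get endpoints k8s.io-minikube-hostpath -o yaml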
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-720293 -n embed-certs-720293
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-720293 -n embed-certs-720293: exit status 2 (438.038644ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
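The status --format flag takes a Go template over minikube's status struct, which is why the harness can query {{.APIServer}} and {{.Host}} separately; several fields can also be combined in one call (the field set here is assumed from the two queries in this post-mortem plus minikube's documented Kubelet field):

    out/minikube-linux-arm64 status -p embed-certs-720293 \
      --format='host:{{.Host}} kubelet:{{.Kubelet}} apiserver:{{.APIServer}}'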
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-720293 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-720293
helpers_test.go:243: (dbg) docker inspect embed-certs-720293:

-- stdout --
	[
	    {
	        "Id": "70d00db6e7822d3b00ce565e804c7ecaca79c8fb11b2d568f3f30fb3df09a34b",
	        "Created": "2025-11-24T14:17:44.795163657Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 198952,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T14:19:32.824615727Z",
	            "FinishedAt": "2025-11-24T14:19:31.995501223Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/70d00db6e7822d3b00ce565e804c7ecaca79c8fb11b2d568f3f30fb3df09a34b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/70d00db6e7822d3b00ce565e804c7ecaca79c8fb11b2d568f3f30fb3df09a34b/hostname",
	        "HostsPath": "/var/lib/docker/containers/70d00db6e7822d3b00ce565e804c7ecaca79c8fb11b2d568f3f30fb3df09a34b/hosts",
	        "LogPath": "/var/lib/docker/containers/70d00db6e7822d3b00ce565e804c7ecaca79c8fb11b2d568f3f30fb3df09a34b/70d00db6e7822d3b00ce565e804c7ecaca79c8fb11b2d568f3f30fb3df09a34b-json.log",
	        "Name": "/embed-certs-720293",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-720293:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-720293",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "70d00db6e7822d3b00ce565e804c7ecaca79c8fb11b2d568f3f30fb3df09a34b",
	                "LowerDir": "/var/lib/docker/overlay2/102ebfc08e7b1e712e2de1b9f813877a1efeaf0db28c4c987f30c212819821a6-init/diff:/var/lib/docker/overlay2/13a44a1c9c7389f495d930a01834ff28273a0e5eb2fe3411fc4db3ff0709690d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/102ebfc08e7b1e712e2de1b9f813877a1efeaf0db28c4c987f30c212819821a6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/102ebfc08e7b1e712e2de1b9f813877a1efeaf0db28c4c987f30c212819821a6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/102ebfc08e7b1e712e2de1b9f813877a1efeaf0db28c4c987f30c212819821a6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-720293",
	                "Source": "/var/lib/docker/volumes/embed-certs-720293/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-720293",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-720293",
	                "name.minikube.sigs.k8s.io": "embed-certs-720293",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5b8f0fded6a4ad6c1295833399fbeda46e1a1b7d88b11e35f03ef7e574a6475a",
	            "SandboxKey": "/var/run/docker/netns/5b8f0fded6a4",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33073"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33074"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33077"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33075"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33076"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-720293": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "0e:5d:d8:4d:25:75",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "8c89ad55a017b9d150fec1f0d910c923b1dbfb234d3a49fcfd228e2952fc9581",
	                    "EndpointID": "4b3ca5a0a9dbf74d8b462d663ee82b33e7bf1a726944be4c3fa2c1d40f071c91",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-720293",
	                        "70d00db6e782"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
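Instead of dumping the whole JSON document, docker inspect -f can pull just the fields this post-mortem cares about, for example run state and the container's address on the cluster network:

    docker inspect -f '{{.State.Status}} {{(index .NetworkSettings.Networks "embed-certs-720293").IPAddress}}' \
      embed-certs-720293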
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-720293 -n embed-certs-720293
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-720293 -n embed-certs-720293: exit status 2 (364.590245ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-720293 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-720293 logs -n 25: (1.43740863s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p old-k8s-version-706771 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-706771       │ jenkins │ v1.37.0 │ 24 Nov 25 14:15 UTC │ 24 Nov 25 14:16 UTC │
	│ image   │ old-k8s-version-706771 image list --format=json                                                                                                                                                                                               │ old-k8s-version-706771       │ jenkins │ v1.37.0 │ 24 Nov 25 14:16 UTC │ 24 Nov 25 14:16 UTC │
	│ pause   │ -p old-k8s-version-706771 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-706771       │ jenkins │ v1.37.0 │ 24 Nov 25 14:16 UTC │                     │
	│ start   │ -p cert-expiration-032076 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-032076       │ jenkins │ v1.37.0 │ 24 Nov 25 14:17 UTC │ 24 Nov 25 14:17 UTC │
	│ delete  │ -p old-k8s-version-706771                                                                                                                                                                                                                     │ old-k8s-version-706771       │ jenkins │ v1.37.0 │ 24 Nov 25 14:17 UTC │ 24 Nov 25 14:17 UTC │
	│ delete  │ -p old-k8s-version-706771                                                                                                                                                                                                                     │ old-k8s-version-706771       │ jenkins │ v1.37.0 │ 24 Nov 25 14:17 UTC │ 24 Nov 25 14:17 UTC │
	│ start   │ -p no-preload-444317 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-444317            │ jenkins │ v1.37.0 │ 24 Nov 25 14:17 UTC │ 24 Nov 25 14:18 UTC │
	│ delete  │ -p cert-expiration-032076                                                                                                                                                                                                                     │ cert-expiration-032076       │ jenkins │ v1.37.0 │ 24 Nov 25 14:17 UTC │ 24 Nov 25 14:17 UTC │
	│ start   │ -p embed-certs-720293 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-720293           │ jenkins │ v1.37.0 │ 24 Nov 25 14:17 UTC │ 24 Nov 25 14:19 UTC │
	│ addons  │ enable metrics-server -p no-preload-444317 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-444317            │ jenkins │ v1.37.0 │ 24 Nov 25 14:18 UTC │                     │
	│ stop    │ -p no-preload-444317 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-444317            │ jenkins │ v1.37.0 │ 24 Nov 25 14:18 UTC │ 24 Nov 25 14:18 UTC │
	│ addons  │ enable dashboard -p no-preload-444317 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-444317            │ jenkins │ v1.37.0 │ 24 Nov 25 14:18 UTC │ 24 Nov 25 14:18 UTC │
	│ start   │ -p no-preload-444317 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-444317            │ jenkins │ v1.37.0 │ 24 Nov 25 14:18 UTC │ 24 Nov 25 14:19 UTC │
	│ addons  │ enable metrics-server -p embed-certs-720293 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-720293           │ jenkins │ v1.37.0 │ 24 Nov 25 14:19 UTC │                     │
	│ stop    │ -p embed-certs-720293 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-720293           │ jenkins │ v1.37.0 │ 24 Nov 25 14:19 UTC │ 24 Nov 25 14:19 UTC │
	│ addons  │ enable dashboard -p embed-certs-720293 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-720293           │ jenkins │ v1.37.0 │ 24 Nov 25 14:19 UTC │ 24 Nov 25 14:19 UTC │
	│ start   │ -p embed-certs-720293 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-720293           │ jenkins │ v1.37.0 │ 24 Nov 25 14:19 UTC │ 24 Nov 25 14:20 UTC │
	│ image   │ no-preload-444317 image list --format=json                                                                                                                                                                                                    │ no-preload-444317            │ jenkins │ v1.37.0 │ 24 Nov 25 14:19 UTC │ 24 Nov 25 14:19 UTC │
	│ pause   │ -p no-preload-444317 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-444317            │ jenkins │ v1.37.0 │ 24 Nov 25 14:19 UTC │                     │
	│ delete  │ -p no-preload-444317                                                                                                                                                                                                                          │ no-preload-444317            │ jenkins │ v1.37.0 │ 24 Nov 25 14:20 UTC │ 24 Nov 25 14:20 UTC │
	│ delete  │ -p no-preload-444317                                                                                                                                                                                                                          │ no-preload-444317            │ jenkins │ v1.37.0 │ 24 Nov 25 14:20 UTC │ 24 Nov 25 14:20 UTC │
	│ delete  │ -p disable-driver-mounts-799392                                                                                                                                                                                                               │ disable-driver-mounts-799392 │ jenkins │ v1.37.0 │ 24 Nov 25 14:20 UTC │ 24 Nov 25 14:20 UTC │
	│ start   │ -p default-k8s-diff-port-152851 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-152851 │ jenkins │ v1.37.0 │ 24 Nov 25 14:20 UTC │                     │
	│ image   │ embed-certs-720293 image list --format=json                                                                                                                                                                                                   │ embed-certs-720293           │ jenkins │ v1.37.0 │ 24 Nov 25 14:20 UTC │ 24 Nov 25 14:20 UTC │
	│ pause   │ -p embed-certs-720293 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-720293           │ jenkins │ v1.37.0 │ 24 Nov 25 14:20 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 14:20:04
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 14:20:04.198524  202335 out.go:360] Setting OutFile to fd 1 ...
	I1124 14:20:04.198976  202335 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 14:20:04.198986  202335 out.go:374] Setting ErrFile to fd 2...
	I1124 14:20:04.198991  202335 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 14:20:04.199262  202335 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-2805/.minikube/bin
	I1124 14:20:04.199735  202335 out.go:368] Setting JSON to false
	I1124 14:20:04.200705  202335 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":7356,"bootTime":1763986649,"procs":208,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1124 14:20:04.200769  202335 start.go:143] virtualization:  
	I1124 14:20:04.205201  202335 out.go:179] * [default-k8s-diff-port-152851] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1124 14:20:04.209580  202335 out.go:179]   - MINIKUBE_LOCATION=21932
	I1124 14:20:04.209828  202335 notify.go:221] Checking for updates...
	I1124 14:20:04.218450  202335 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 14:20:04.221692  202335 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21932-2805/kubeconfig
	I1124 14:20:04.224881  202335 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21932-2805/.minikube
	I1124 14:20:04.228173  202335 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1124 14:20:04.231299  202335 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 14:20:04.234802  202335 config.go:182] Loaded profile config "embed-certs-720293": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 14:20:04.234965  202335 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 14:20:04.282871  202335 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1124 14:20:04.283110  202335 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 14:20:04.392646  202335 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-24 14:20:04.381964454 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 14:20:04.392760  202335 docker.go:319] overlay module found
	I1124 14:20:04.396815  202335 out.go:179] * Using the docker driver based on user configuration
	I1124 14:20:04.399891  202335 start.go:309] selected driver: docker
	I1124 14:20:04.399923  202335 start.go:927] validating driver "docker" against <nil>
	I1124 14:20:04.399938  202335 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 14:20:04.400698  202335 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 14:20:04.500553  202335 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-24 14:20:04.4908985 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 14:20:04.500798  202335 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1124 14:20:04.501155  202335 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 14:20:04.507862  202335 out.go:179] * Using Docker driver with root privileges
	I1124 14:20:04.510891  202335 cni.go:84] Creating CNI manager for ""
	I1124 14:20:04.510970  202335 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 14:20:04.510990  202335 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1124 14:20:04.511075  202335 start.go:353] cluster config:
	{Name:default-k8s-diff-port-152851 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-152851 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 14:20:04.514628  202335 out.go:179] * Starting "default-k8s-diff-port-152851" primary control-plane node in "default-k8s-diff-port-152851" cluster
	I1124 14:20:04.517789  202335 cache.go:134] Beginning downloading kic base image for docker with crio
	I1124 14:20:04.521016  202335 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1124 14:20:04.523870  202335 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 14:20:04.523923  202335 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21932-2805/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1124 14:20:04.523933  202335 cache.go:65] Caching tarball of preloaded images
	I1124 14:20:04.524027  202335 preload.go:238] Found /home/jenkins/minikube-integration/21932-2805/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1124 14:20:04.524042  202335 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1124 14:20:04.524150  202335 profile.go:143] Saving config to /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/default-k8s-diff-port-152851/config.json ...
	I1124 14:20:04.524181  202335 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/default-k8s-diff-port-152851/config.json: {Name:mk81282ee6baabf5ef7a33c2dea2ac9e00f5abf4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:20:04.524334  202335 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1124 14:20:04.553574  202335 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1124 14:20:04.553597  202335 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1124 14:20:04.553618  202335 cache.go:240] Successfully downloaded all kic artifacts
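
The cache checks at 14:20:04.524-.553 show the lookup order: the local preload tarball is verified first, then the pinned kicbase digest is looked up in the local Docker daemon, and both the download and the load are skipped on a hit. What the daemon has cached can be listed directly:

    docker image ls gcr.io/k8s-minikube/kicbase-builds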
	I1124 14:20:04.553649  202335 start.go:360] acquireMachinesLock for default-k8s-diff-port-152851: {Name:mke46aeaf15b4ddafe579277f6642f055937b3b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 14:20:04.553784  202335 start.go:364] duration metric: took 107.021µs to acquireMachinesLock for "default-k8s-diff-port-152851"
	I1124 14:20:04.553817  202335 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-152851 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-152851 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 14:20:04.553965  202335 start.go:125] createHost starting for "" (driver="docker")
	W1124 14:20:02.670651  198824 pod_ready.go:104] pod "coredns-66bc5c9577-6nztq" is not "Ready", error: <nil>
	W1124 14:20:04.672690  198824 pod_ready.go:104] pod "coredns-66bc5c9577-6nztq" is not "Ready", error: <nil>
	W1124 14:20:07.166252  198824 pod_ready.go:104] pod "coredns-66bc5c9577-6nztq" is not "Ready", error: <nil>
	I1124 14:20:04.557666  202335 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1124 14:20:04.557962  202335 start.go:159] libmachine.API.Create for "default-k8s-diff-port-152851" (driver="docker")
	I1124 14:20:04.558000  202335 client.go:173] LocalClient.Create starting
	I1124 14:20:04.558095  202335 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca.pem
	I1124 14:20:04.558274  202335 main.go:143] libmachine: Decoding PEM data...
	I1124 14:20:04.558298  202335 main.go:143] libmachine: Parsing certificate...
	I1124 14:20:04.558377  202335 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21932-2805/.minikube/certs/cert.pem
	I1124 14:20:04.558407  202335 main.go:143] libmachine: Decoding PEM data...
	I1124 14:20:04.558438  202335 main.go:143] libmachine: Parsing certificate...
	I1124 14:20:04.558828  202335 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-152851 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1124 14:20:04.583684  202335 cli_runner.go:211] docker network inspect default-k8s-diff-port-152851 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1124 14:20:04.583770  202335 network_create.go:284] running [docker network inspect default-k8s-diff-port-152851] to gather additional debugging logs...
	I1124 14:20:04.583786  202335 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-152851
	W1124 14:20:04.602664  202335 cli_runner.go:211] docker network inspect default-k8s-diff-port-152851 returned with exit code 1
	I1124 14:20:04.602697  202335 network_create.go:287] error running [docker network inspect default-k8s-diff-port-152851]: docker network inspect default-k8s-diff-port-152851: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-152851 not found
	I1124 14:20:04.602713  202335 network_create.go:289] output of [docker network inspect default-k8s-diff-port-152851]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-152851 not found
	
	** /stderr **
	I1124 14:20:04.602815  202335 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 14:20:04.634619  202335 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-b3087ee9f269 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:1a:07:60:94:e6:54} reservation:<nil>}
	I1124 14:20:04.634970  202335 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-87dca5a19352 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:e6:6c:c1:85:45:94} reservation:<nil>}
	I1124 14:20:04.635512  202335 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-9e995bd1b79e IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:82:f1:73:f5:6f:cf} reservation:<nil>}
	I1124 14:20:04.635919  202335 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a40120}
	I1124 14:20:04.635936  202335 network_create.go:124] attempt to create docker network default-k8s-diff-port-152851 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1124 14:20:04.635989  202335 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-152851 default-k8s-diff-port-152851
	I1124 14:20:04.707041  202335 network_create.go:108] docker network default-k8s-diff-port-152851 192.168.76.0/24 created
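
The three "skipping subnet" probes above appear to step through candidate /24s nine at a time (49, 58, 67, 76, ...) until one is not held by an existing docker bridge. A minimal sketch of that selection loop, assuming that step size and the taken set reported in the log:

    // Sketch of the free-subnet scan; step size of 9 is inferred from the
    // candidates above, and the taken set is the one the log reports.
    package main

    import "fmt"

    func main() {
        taken := map[string]bool{
            "192.168.49.0/24": true, // br-b3087ee9f269
            "192.168.58.0/24": true, // br-87dca5a19352
            "192.168.67.0/24": true, // br-9e995bd1b79e
        }
        for third := 49; third <= 247; third += 9 {
            subnet := fmt.Sprintf("192.168.%d.0/24", third)
            if !taken[subnet] {
                fmt.Println("using free private subnet", subnet) // -> 192.168.76.0/24
                return
            }
        }
    }
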
	I1124 14:20:04.707090  202335 kic.go:121] calculated static IP "192.168.76.2" for the "default-k8s-diff-port-152851" container
	I1124 14:20:04.707163  202335 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1124 14:20:04.735419  202335 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-152851 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-152851 --label created_by.minikube.sigs.k8s.io=true
	I1124 14:20:04.766257  202335 oci.go:103] Successfully created a docker volume default-k8s-diff-port-152851
	I1124 14:20:04.766348  202335 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-152851-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-152851 --entrypoint /usr/bin/test -v default-k8s-diff-port-152851:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1124 14:20:05.654315  202335 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-152851
	I1124 14:20:05.654377  202335 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 14:20:05.654386  202335 kic.go:194] Starting extracting preloaded images to volume ...
	I1124 14:20:05.654457  202335 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21932-2805/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-152851:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir
	W1124 14:20:09.166849  198824 pod_ready.go:104] pod "coredns-66bc5c9577-6nztq" is not "Ready", error: <nil>
	W1124 14:20:11.168930  198824 pod_ready.go:104] pod "coredns-66bc5c9577-6nztq" is not "Ready", error: <nil>
	I1124 14:20:10.214747  202335 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21932-2805/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-152851:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir: (4.560254044s)
	I1124 14:20:10.214781  202335 kic.go:203] duration metric: took 4.560391417s to extract preloaded images to volume ...
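
The 4.5s extraction above is a one-shot container whose entrypoint is tar, unpacking the lz4 preload tarball straight into the named volume that later becomes the node's /var. A sketch of the same invocation driven from Go via os/exec, the way cli_runner.go wraps the docker CLI; the host path, volume name, and image ref below are placeholders, not the real values:

    // Sketch: one-shot docker run that extracts a preload tarball into a volume.
    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("docker", "run", "--rm",
            "--entrypoint", "/usr/bin/tar",
            "-v", "/path/to/preload.tar.lz4:/preloaded.tar:ro", // placeholder host path
            "-v", "node-volume:/extractDir",                    // placeholder volume
            "kicbase-image",                                    // placeholder image ref
            "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
        if out, err := cmd.CombinedOutput(); err != nil {
            log.Fatalf("preload extraction failed: %v\n%s", err, out)
        }
    }
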
	W1124 14:20:10.214927  202335 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1124 14:20:10.215034  202335 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1124 14:20:10.285849  202335 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-152851 --name default-k8s-diff-port-152851 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-152851 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-152851 --network default-k8s-diff-port-152851 --ip 192.168.76.2 --volume default-k8s-diff-port-152851:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1124 14:20:10.594232  202335 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-152851 --format={{.State.Running}}
	I1124 14:20:10.611266  202335 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-152851 --format={{.State.Status}}
	I1124 14:20:10.639225  202335 cli_runner.go:164] Run: docker exec default-k8s-diff-port-152851 stat /var/lib/dpkg/alternatives/iptables
	I1124 14:20:10.696366  202335 oci.go:144] the created container "default-k8s-diff-port-152851" has a running status.
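
The --publish=127.0.0.1:: flags in the docker run above bind each container port to an ephemeral host port; the provisioner recovers the one for 22/tcp with the Ports template query a few lines below. A sketch of that recovery step:

    // Sketch: find which ephemeral host port docker bound to the container's
    // 22/tcp, using the same Go-template query the provisioner issues.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("docker", "container", "inspect", "-f",
            `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
            "default-k8s-diff-port-152851").Output()
        if err != nil {
            fmt.Println("inspect failed:", err)
            return
        }
        fmt.Println("ssh on 127.0.0.1:" + strings.TrimSpace(string(out))) // 33078 in this run
    }
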
	I1124 14:20:10.696400  202335 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21932-2805/.minikube/machines/default-k8s-diff-port-152851/id_rsa...
	I1124 14:20:11.119690  202335 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21932-2805/.minikube/machines/default-k8s-diff-port-152851/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1124 14:20:11.146515  202335 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-152851 --format={{.State.Status}}
	I1124 14:20:11.168183  202335 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1124 14:20:11.168207  202335 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-152851 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1124 14:20:11.213778  202335 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-152851 --format={{.State.Status}}
	I1124 14:20:11.231702  202335 machine.go:94] provisionDockerMachine start ...
	I1124 14:20:11.231799  202335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-152851
	I1124 14:20:11.249363  202335 main.go:143] libmachine: Using SSH client type: native
	I1124 14:20:11.249713  202335 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1124 14:20:11.249727  202335 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 14:20:11.250305  202335 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48962->127.0.0.1:33078: read: connection reset by peer
	I1124 14:20:14.403266  202335 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-152851
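
The "connection reset by peer" at 14:20:11 followed by the clean hostname reply at 14:20:14 is the provisioner retrying until sshd inside the freshly started container accepts connections. A sketch of that wait loop; the retry budget is an assumption:

    // Sketch: keep dialing the forwarded ssh port until the container's sshd answers.
    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        addr := "127.0.0.1:33078" // forwarded 22/tcp port from this run
        for i := 0; i < 30; i++ {
            conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
            if err == nil {
                conn.Close()
                fmt.Println("sshd is accepting connections")
                return
            }
            time.Sleep(time.Second)
        }
        fmt.Println("gave up waiting for sshd")
    }
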
	
	I1124 14:20:14.403289  202335 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-152851"
	I1124 14:20:14.403425  202335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-152851
	I1124 14:20:14.421245  202335 main.go:143] libmachine: Using SSH client type: native
	I1124 14:20:14.421644  202335 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1124 14:20:14.421665  202335 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-152851 && echo "default-k8s-diff-port-152851" | sudo tee /etc/hostname
	I1124 14:20:14.589533  202335 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-152851
	
	I1124 14:20:14.589613  202335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-152851
	I1124 14:20:14.609605  202335 main.go:143] libmachine: Using SSH client type: native
	I1124 14:20:14.610187  202335 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1124 14:20:14.610211  202335 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-152851' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-152851/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-152851' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 14:20:14.763857  202335 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 14:20:14.763967  202335 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21932-2805/.minikube CaCertPath:/home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21932-2805/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21932-2805/.minikube}
	I1124 14:20:14.764043  202335 ubuntu.go:190] setting up certificates
	I1124 14:20:14.764084  202335 provision.go:84] configureAuth start
	I1124 14:20:14.764209  202335 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-152851
	I1124 14:20:14.782935  202335 provision.go:143] copyHostCerts
	I1124 14:20:14.783016  202335 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-2805/.minikube/ca.pem, removing ...
	I1124 14:20:14.783025  202335 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-2805/.minikube/ca.pem
	I1124 14:20:14.783116  202335 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21932-2805/.minikube/ca.pem (1078 bytes)
	I1124 14:20:14.783238  202335 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-2805/.minikube/cert.pem, removing ...
	I1124 14:20:14.783244  202335 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-2805/.minikube/cert.pem
	I1124 14:20:14.783282  202335 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-2805/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21932-2805/.minikube/cert.pem (1123 bytes)
	I1124 14:20:14.783418  202335 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-2805/.minikube/key.pem, removing ...
	I1124 14:20:14.783426  202335 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-2805/.minikube/key.pem
	I1124 14:20:14.783463  202335 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-2805/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21932-2805/.minikube/key.pem (1675 bytes)
	I1124 14:20:14.783557  202335 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21932-2805/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-152851 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-152851 localhost minikube]
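
The server cert generated here carries the SAN list printed above, so the same certificate is valid whether the machine is reached by loopback, the container IP, or any of its hostnames. A self-signed sketch with crypto/x509 under the same SANs (minikube actually signs with its ca.pem/ca-key.pem rather than self-signing):

    // Sketch: build a server certificate with the SAN list from the log.
    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-152851"}},
            DNSNames:     []string{"default-k8s-diff-port-152851", "localhost", "minikube"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
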
	I1124 14:20:15.087707  202335 provision.go:177] copyRemoteCerts
	I1124 14:20:15.087813  202335 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 14:20:15.087883  202335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-152851
	I1124 14:20:15.105542  202335 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/default-k8s-diff-port-152851/id_rsa Username:docker}
	I1124 14:20:15.211038  202335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1124 14:20:15.229178  202335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1124 14:20:15.247466  202335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1124 14:20:15.265068  202335 provision.go:87] duration metric: took 500.947008ms to configureAuth
	I1124 14:20:15.265099  202335 ubuntu.go:206] setting minikube options for container-runtime
	I1124 14:20:15.265293  202335 config.go:182] Loaded profile config "default-k8s-diff-port-152851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 14:20:15.265406  202335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-152851
	I1124 14:20:15.282436  202335 main.go:143] libmachine: Using SSH client type: native
	I1124 14:20:15.282756  202335 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1124 14:20:15.282769  202335 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1124 14:20:15.670282  202335 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1124 14:20:15.670306  202335 machine.go:97] duration metric: took 4.438580402s to provisionDockerMachine
	I1124 14:20:15.670317  202335 client.go:176] duration metric: took 11.112309664s to LocalClient.Create
	I1124 14:20:15.670332  202335 start.go:167] duration metric: took 11.112371178s to libmachine.API.Create "default-k8s-diff-port-152851"
	I1124 14:20:15.670352  202335 start.go:293] postStartSetup for "default-k8s-diff-port-152851" (driver="docker")
	I1124 14:20:15.670366  202335 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 14:20:15.670438  202335 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 14:20:15.670486  202335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-152851
	I1124 14:20:15.687858  202335 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/default-k8s-diff-port-152851/id_rsa Username:docker}
	I1124 14:20:15.795621  202335 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 14:20:15.798925  202335 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 14:20:15.798954  202335 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 14:20:15.798966  202335 filesync.go:126] Scanning /home/jenkins/minikube-integration/21932-2805/.minikube/addons for local assets ...
	I1124 14:20:15.799021  202335 filesync.go:126] Scanning /home/jenkins/minikube-integration/21932-2805/.minikube/files for local assets ...
	I1124 14:20:15.799098  202335 filesync.go:149] local asset: /home/jenkins/minikube-integration/21932-2805/.minikube/files/etc/ssl/certs/46112.pem -> 46112.pem in /etc/ssl/certs
	I1124 14:20:15.799200  202335 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 14:20:15.806825  202335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/files/etc/ssl/certs/46112.pem --> /etc/ssl/certs/46112.pem (1708 bytes)
	I1124 14:20:15.824887  202335 start.go:296] duration metric: took 154.516454ms for postStartSetup
	I1124 14:20:15.825270  202335 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-152851
	I1124 14:20:15.842546  202335 profile.go:143] Saving config to /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/default-k8s-diff-port-152851/config.json ...
	I1124 14:20:15.842841  202335 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 14:20:15.842894  202335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-152851
	I1124 14:20:15.864860  202335 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/default-k8s-diff-port-152851/id_rsa Username:docker}
	I1124 14:20:15.972921  202335 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 14:20:15.981464  202335 start.go:128] duration metric: took 11.427482026s to createHost
	I1124 14:20:15.981488  202335 start.go:83] releasing machines lock for "default-k8s-diff-port-152851", held for 11.427690218s
	I1124 14:20:15.981562  202335 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-152851
	I1124 14:20:15.999018  202335 ssh_runner.go:195] Run: cat /version.json
	I1124 14:20:15.999074  202335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-152851
	I1124 14:20:15.999295  202335 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 14:20:15.999427  202335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-152851
	I1124 14:20:16.025507  202335 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/default-k8s-diff-port-152851/id_rsa Username:docker}
	I1124 14:20:16.038110  202335 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/default-k8s-diff-port-152851/id_rsa Username:docker}
	I1124 14:20:16.131726  202335 ssh_runner.go:195] Run: systemctl --version
	I1124 14:20:16.227734  202335 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1124 14:20:16.272290  202335 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 14:20:16.276779  202335 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 14:20:16.276860  202335 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 14:20:16.305624  202335 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
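
The find/-exec mv pass above sidelines any pre-existing bridge or podman CNI configs by renaming them with a .mk_disabled suffix, so only the CNI minikube installs (kindnet, recommended later in this run) stays active. Roughly the same pass in Go:

    // Sketch: rename bridge/podman CNI configs out of the way, as the find/mv does.
    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    func main() {
        matches, _ := filepath.Glob("/etc/cni/net.d/*")
        for _, f := range matches {
            base := filepath.Base(f)
            disabled := strings.HasSuffix(base, ".mk_disabled")
            if (strings.Contains(base, "bridge") || strings.Contains(base, "podman")) && !disabled {
                if err := os.Rename(f, f+".mk_disabled"); err == nil {
                    fmt.Println("disabled", f)
                }
            }
        }
    }
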
	I1124 14:20:16.305649  202335 start.go:496] detecting cgroup driver to use...
	I1124 14:20:16.305681  202335 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1124 14:20:16.305741  202335 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1124 14:20:16.324594  202335 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1124 14:20:16.337355  202335 docker.go:218] disabling cri-docker service (if available) ...
	I1124 14:20:16.337441  202335 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 14:20:16.355152  202335 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 14:20:16.375011  202335 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 14:20:16.501048  202335 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 14:20:16.639974  202335 docker.go:234] disabling docker service ...
	I1124 14:20:16.640092  202335 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 14:20:16.661507  202335 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 14:20:16.678764  202335 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 14:20:16.809163  202335 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 14:20:16.939393  202335 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 14:20:16.953524  202335 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 14:20:16.968120  202335 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1124 14:20:16.968215  202335 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 14:20:16.978073  202335 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1124 14:20:16.978156  202335 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 14:20:16.987659  202335 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 14:20:16.997483  202335 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 14:20:17.008824  202335 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 14:20:17.018192  202335 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 14:20:17.027749  202335 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 14:20:17.042176  202335 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 14:20:17.051563  202335 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 14:20:17.059572  202335 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 14:20:17.067106  202335 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 14:20:17.190205  202335 ssh_runner.go:195] Run: sudo systemctl restart crio
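
The sequence from 14:20:16.968 to 14:20:17.042 is a series of anchored sed rewrites of /etc/crio/crio.conf.d/02-crio.conf: pin the pause image, switch the cgroup manager to cgroupfs, and re-seed default_sysctls so unprivileged ports start at 0, after which the daemon-reload and crio restart pick the file up. The first two rewrites, expressed as the equivalent Go regexp edits on an example snippet:

    // Sketch: the pause_image and cgroup_manager sed edits as anchored rewrites.
    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        conf := "pause_image = \"registry.k8s.io/pause:3.9\"\ncgroup_manager = \"systemd\"\n"
        conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
        conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
        fmt.Print(conf)
    }
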
	I1124 14:20:17.367422  202335 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1124 14:20:17.367544  202335 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1124 14:20:17.372313  202335 start.go:564] Will wait 60s for crictl version
	I1124 14:20:17.372419  202335 ssh_runner.go:195] Run: which crictl
	I1124 14:20:17.379641  202335 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 14:20:17.427228  202335 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1124 14:20:17.427342  202335 ssh_runner.go:195] Run: crio --version
	I1124 14:20:17.469021  202335 ssh_runner.go:195] Run: crio --version
	I1124 14:20:17.505376  202335 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	W1124 14:20:13.667196  198824 pod_ready.go:104] pod "coredns-66bc5c9577-6nztq" is not "Ready", error: <nil>
	W1124 14:20:16.168503  198824 pod_ready.go:104] pod "coredns-66bc5c9577-6nztq" is not "Ready", error: <nil>
	I1124 14:20:17.508284  202335 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-152851 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 14:20:17.525499  202335 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1124 14:20:17.529738  202335 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 14:20:17.541127  202335 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-152851 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-152851 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 14:20:17.541255  202335 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 14:20:17.541313  202335 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 14:20:17.583559  202335 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 14:20:17.583585  202335 crio.go:433] Images already preloaded, skipping extraction
	I1124 14:20:17.583642  202335 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 14:20:17.611510  202335 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 14:20:17.611534  202335 cache_images.go:86] Images are preloaded, skipping loading
	I1124 14:20:17.611543  202335 kubeadm.go:935] updating node { 192.168.76.2 8444 v1.34.1 crio true true} ...
	I1124 14:20:17.611665  202335 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-152851 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-152851 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1124 14:20:17.611748  202335 ssh_runner.go:195] Run: crio config
	I1124 14:20:17.676085  202335 cni.go:84] Creating CNI manager for ""
	I1124 14:20:17.676158  202335 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 14:20:17.676187  202335 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 14:20:17.676240  202335 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-152851 NodeName:default-k8s-diff-port-152851 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 14:20:17.676408  202335 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-152851"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1124 14:20:17.676514  202335 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1124 14:20:17.684937  202335 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 14:20:17.685051  202335 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 14:20:17.692619  202335 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1124 14:20:17.706039  202335 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 14:20:17.718987  202335 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
	I1124 14:20:17.736136  202335 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1124 14:20:17.739715  202335 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
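
The bash one-liner above idempotently pins control-plane.minikube.internal in /etc/hosts: strip any stale line ending in the name, then append the current mapping. The same transformation in Go (pinHost is a hypothetical helper name, not a minikube function):

    // Sketch: strip-then-append rewrite of an /etc/hosts blob.
    package main

    import (
        "fmt"
        "strings"
    )

    func pinHost(hosts, ip, name string) string {
        var keep []string
        for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
            if !strings.HasSuffix(line, "\t"+name) {
                keep = append(keep, line)
            }
        }
        keep = append(keep, ip+"\t"+name)
        return strings.Join(keep, "\n") + "\n"
    }

    func main() {
        fmt.Print(pinHost("127.0.0.1\tlocalhost\n", "192.168.76.2", "control-plane.minikube.internal"))
    }
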
	I1124 14:20:17.749652  202335 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 14:20:17.873752  202335 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 14:20:17.898314  202335 certs.go:69] Setting up /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/default-k8s-diff-port-152851 for IP: 192.168.76.2
	I1124 14:20:17.898382  202335 certs.go:195] generating shared ca certs ...
	I1124 14:20:17.898413  202335 certs.go:227] acquiring lock for ca certs: {Name:mk5b88bcf3bee8e73291a2c9c79f99bafa2afa7b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:20:17.898623  202335 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21932-2805/.minikube/ca.key
	I1124 14:20:17.898692  202335 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21932-2805/.minikube/proxy-client-ca.key
	I1124 14:20:17.898727  202335 certs.go:257] generating profile certs ...
	I1124 14:20:17.898803  202335 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/default-k8s-diff-port-152851/client.key
	I1124 14:20:17.898842  202335 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/default-k8s-diff-port-152851/client.crt with IP's: []
	I1124 14:20:19.076842  202335 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/default-k8s-diff-port-152851/client.crt ...
	I1124 14:20:19.076926  202335 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/default-k8s-diff-port-152851/client.crt: {Name:mk866d39bdecb66e85282d9e13c4f03f10d86194 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:20:19.077153  202335 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/default-k8s-diff-port-152851/client.key ...
	I1124 14:20:19.077170  202335 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/default-k8s-diff-port-152851/client.key: {Name:mkc00caae046e2cc48f44b7b7aac04ba6a21dd99 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:20:19.077296  202335 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/default-k8s-diff-port-152851/apiserver.key.ec9a3231
	I1124 14:20:19.077318  202335 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/default-k8s-diff-port-152851/apiserver.crt.ec9a3231 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1124 14:20:19.512402  202335 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/default-k8s-diff-port-152851/apiserver.crt.ec9a3231 ...
	I1124 14:20:19.512433  202335 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/default-k8s-diff-port-152851/apiserver.crt.ec9a3231: {Name:mk1d05d9f03c5191ef9c06f9dd3bd1f5350ffbad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:20:19.512629  202335 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/default-k8s-diff-port-152851/apiserver.key.ec9a3231 ...
	I1124 14:20:19.512646  202335 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/default-k8s-diff-port-152851/apiserver.key.ec9a3231: {Name:mk058e0ece22697ffefcd3af3bfdb811e8504c78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:20:19.512728  202335 certs.go:382] copying /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/default-k8s-diff-port-152851/apiserver.crt.ec9a3231 -> /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/default-k8s-diff-port-152851/apiserver.crt
	I1124 14:20:19.512813  202335 certs.go:386] copying /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/default-k8s-diff-port-152851/apiserver.key.ec9a3231 -> /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/default-k8s-diff-port-152851/apiserver.key
	I1124 14:20:19.512893  202335 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/default-k8s-diff-port-152851/proxy-client.key
	I1124 14:20:19.512914  202335 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/default-k8s-diff-port-152851/proxy-client.crt with IP's: []
	I1124 14:20:19.745759  202335 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/default-k8s-diff-port-152851/proxy-client.crt ...
	I1124 14:20:19.745788  202335 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/default-k8s-diff-port-152851/proxy-client.crt: {Name:mk30be739c9f6f3ed1e613e63c6f7abce6dcd053 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:20:19.745969  202335 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/default-k8s-diff-port-152851/proxy-client.key ...
	I1124 14:20:19.745984  202335 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/default-k8s-diff-port-152851/proxy-client.key: {Name:mk6dff98d9974ded0b0a41ba88fda39d136b1e2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:20:19.746207  202335 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2805/.minikube/certs/4611.pem (1338 bytes)
	W1124 14:20:19.746255  202335 certs.go:480] ignoring /home/jenkins/minikube-integration/21932-2805/.minikube/certs/4611_empty.pem, impossibly tiny 0 bytes
	I1124 14:20:19.746271  202335 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca-key.pem (1679 bytes)
	I1124 14:20:19.746298  202335 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca.pem (1078 bytes)
	I1124 14:20:19.746328  202335 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2805/.minikube/certs/cert.pem (1123 bytes)
	I1124 14:20:19.746357  202335 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2805/.minikube/certs/key.pem (1675 bytes)
	I1124 14:20:19.746406  202335 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2805/.minikube/files/etc/ssl/certs/46112.pem (1708 bytes)
	I1124 14:20:19.747056  202335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 14:20:19.765341  202335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1124 14:20:19.784048  202335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 14:20:19.803039  202335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1124 14:20:19.822916  202335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/default-k8s-diff-port-152851/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1124 14:20:19.842479  202335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/default-k8s-diff-port-152851/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1124 14:20:19.861445  202335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/default-k8s-diff-port-152851/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 14:20:19.880399  202335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/default-k8s-diff-port-152851/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1124 14:20:19.903142  202335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/certs/4611.pem --> /usr/share/ca-certificates/4611.pem (1338 bytes)
	I1124 14:20:19.923840  202335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/files/etc/ssl/certs/46112.pem --> /usr/share/ca-certificates/46112.pem (1708 bytes)
	I1124 14:20:19.950883  202335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 14:20:19.972825  202335 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 14:20:19.987996  202335 ssh_runner.go:195] Run: openssl version
	I1124 14:20:19.994941  202335 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4611.pem && ln -fs /usr/share/ca-certificates/4611.pem /etc/ssl/certs/4611.pem"
	I1124 14:20:20.013822  202335 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4611.pem
	I1124 14:20:20.019615  202335 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 13:21 /usr/share/ca-certificates/4611.pem
	I1124 14:20:20.019692  202335 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4611.pem
	I1124 14:20:20.068389  202335 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4611.pem /etc/ssl/certs/51391683.0"
	I1124 14:20:20.077977  202335 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/46112.pem && ln -fs /usr/share/ca-certificates/46112.pem /etc/ssl/certs/46112.pem"
	I1124 14:20:20.088160  202335 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/46112.pem
	I1124 14:20:20.093054  202335 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 13:21 /usr/share/ca-certificates/46112.pem
	I1124 14:20:20.093131  202335 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/46112.pem
	I1124 14:20:20.140641  202335 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/46112.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 14:20:20.149628  202335 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 14:20:20.160237  202335 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 14:20:20.165466  202335 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 13:14 /usr/share/ca-certificates/minikubeCA.pem
	I1124 14:20:20.165550  202335 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 14:20:20.207861  202335 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
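
Each certificate is installed twice: once under its own name and once as a <subject-hash>.0 symlink (51391683.0, 3ec20f2e.0, b5213941.0 above), which is the lookup scheme OpenSSL uses for CApath directories. A sketch that reproduces the hash-and-link step for minikubeCA.pem by shelling out to the same openssl invocation:

    // Sketch: compute the subject hash as the log does and print the implied link.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout",
            "-in", "/usr/share/ca-certificates/minikubeCA.pem").Output()
        if err != nil {
            fmt.Println("openssl failed:", err)
            return
        }
        hash := strings.TrimSpace(string(out)) // b5213941 in this run
        fmt.Printf("ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/%s.0\n", hash)
    }
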
	I1124 14:20:20.216685  202335 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 14:20:20.221354  202335 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1124 14:20:20.221408  202335 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-152851 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-152851 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 14:20:20.221484  202335 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 14:20:20.221557  202335 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 14:20:20.260072  202335 cri.go:89] found id: ""
	I1124 14:20:20.260139  202335 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 14:20:20.273893  202335 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1124 14:20:20.287746  202335 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1124 14:20:20.287816  202335 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1124 14:20:20.296116  202335 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1124 14:20:20.296134  202335 kubeadm.go:158] found existing configuration files:
	
	I1124 14:20:20.296188  202335 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1124 14:20:20.305332  202335 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1124 14:20:20.305439  202335 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1124 14:20:20.313623  202335 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1124 14:20:20.321800  202335 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1124 14:20:20.321915  202335 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1124 14:20:20.329659  202335 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1124 14:20:20.337839  202335 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1124 14:20:20.337937  202335 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1124 14:20:20.345901  202335 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1124 14:20:20.354347  202335 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1124 14:20:20.354444  202335 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
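
The four grep-then-rm pairs above are a staleness check: any kubeconfig that does not already point at https://control-plane.minikube.internal:8444 is deleted so kubeadm regenerates it. Collapsed into one Go loop:

    // Sketch: drop any kubeconfig that does not pin the expected endpoint.
    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        endpoint := "https://control-plane.minikube.internal:8444"
        for _, f := range []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        } {
            data, err := os.ReadFile(f)
            if err != nil || !strings.Contains(string(data), endpoint) {
                os.Remove(f) // missing or stale; kubeadm init will regenerate it
                fmt.Println("removed", f)
            }
        }
    }
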
	I1124 14:20:20.362305  202335 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1124 14:20:20.413368  202335 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1124 14:20:20.413595  202335 kubeadm.go:319] [preflight] Running pre-flight checks
	I1124 14:20:20.440188  202335 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1124 14:20:20.440266  202335 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1124 14:20:20.440307  202335 kubeadm.go:319] OS: Linux
	I1124 14:20:20.440356  202335 kubeadm.go:319] CGROUPS_CPU: enabled
	I1124 14:20:20.440410  202335 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1124 14:20:20.440463  202335 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1124 14:20:20.440521  202335 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1124 14:20:20.440583  202335 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1124 14:20:20.440640  202335 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1124 14:20:20.440691  202335 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1124 14:20:20.440744  202335 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1124 14:20:20.440794  202335 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1124 14:20:20.514418  202335 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1124 14:20:20.514539  202335 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1124 14:20:20.514638  202335 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1124 14:20:20.523871  202335 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W1124 14:20:18.670755  198824 pod_ready.go:104] pod "coredns-66bc5c9577-6nztq" is not "Ready", error: <nil>
	W1124 14:20:21.170616  198824 pod_ready.go:104] pod "coredns-66bc5c9577-6nztq" is not "Ready", error: <nil>
	I1124 14:20:20.528169  202335 out.go:252]   - Generating certificates and keys ...
	I1124 14:20:20.528333  202335 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1124 14:20:20.528449  202335 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1124 14:20:21.284082  202335 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1124 14:20:21.744823  202335 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1124 14:20:22.641608  202335 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1124 14:20:22.861618  202335 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1124 14:20:23.289517  202335 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1124 14:20:23.289699  202335 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-152851 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1124 14:20:23.583104  202335 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1124 14:20:23.583463  202335 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-152851 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	W1124 14:20:23.666970  198824 pod_ready.go:104] pod "coredns-66bc5c9577-6nztq" is not "Ready", error: <nil>
	W1124 14:20:26.168532  198824 pod_ready.go:104] pod "coredns-66bc5c9577-6nztq" is not "Ready", error: <nil>
	I1124 14:20:24.508728  202335 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1124 14:20:25.604277  202335 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1124 14:20:26.093530  202335 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1124 14:20:26.093810  202335 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1124 14:20:27.214557  202335 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1124 14:20:27.370550  202335 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1124 14:20:27.491910  202335 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1124 14:20:28.164654  202335 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1124 14:20:28.378543  202335 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1124 14:20:28.379163  202335 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1124 14:20:28.382943  202335 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1124 14:20:28.386160  202335 out.go:252]   - Booting up control plane ...
	I1124 14:20:28.386265  202335 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1124 14:20:28.386349  202335 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1124 14:20:28.387194  202335 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1124 14:20:28.440098  202335 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1124 14:20:28.440206  202335 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1124 14:20:28.451707  202335 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1124 14:20:28.452072  202335 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1124 14:20:28.452296  202335 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1124 14:20:28.594949  202335 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1124 14:20:28.595070  202335 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	W1124 14:20:28.169115  198824 pod_ready.go:104] pod "coredns-66bc5c9577-6nztq" is not "Ready", error: <nil>
	I1124 14:20:28.669170  198824 pod_ready.go:94] pod "coredns-66bc5c9577-6nztq" is "Ready"
	I1124 14:20:28.669199  198824 pod_ready.go:86] duration metric: took 41.008302381s for pod "coredns-66bc5c9577-6nztq" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:20:28.672683  198824 pod_ready.go:83] waiting for pod "etcd-embed-certs-720293" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:20:28.679991  198824 pod_ready.go:94] pod "etcd-embed-certs-720293" is "Ready"
	I1124 14:20:28.680019  198824 pod_ready.go:86] duration metric: took 7.309484ms for pod "etcd-embed-certs-720293" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:20:28.682843  198824 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-720293" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:20:28.688281  198824 pod_ready.go:94] pod "kube-apiserver-embed-certs-720293" is "Ready"
	I1124 14:20:28.688305  198824 pod_ready.go:86] duration metric: took 5.437235ms for pod "kube-apiserver-embed-certs-720293" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:20:28.690924  198824 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-720293" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:20:28.865493  198824 pod_ready.go:94] pod "kube-controller-manager-embed-certs-720293" is "Ready"
	I1124 14:20:28.865525  198824 pod_ready.go:86] duration metric: took 174.576542ms for pod "kube-controller-manager-embed-certs-720293" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:20:29.065560  198824 pod_ready.go:83] waiting for pod "kube-proxy-pwpl4" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:20:29.465517  198824 pod_ready.go:94] pod "kube-proxy-pwpl4" is "Ready"
	I1124 14:20:29.465542  198824 pod_ready.go:86] duration metric: took 399.904007ms for pod "kube-proxy-pwpl4" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:20:29.665874  198824 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-720293" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:20:30.072055  198824 pod_ready.go:94] pod "kube-scheduler-embed-certs-720293" is "Ready"
	I1124 14:20:30.072084  198824 pod_ready.go:86] duration metric: took 406.18527ms for pod "kube-scheduler-embed-certs-720293" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:20:30.072097  198824 pod_ready.go:40] duration metric: took 42.415483801s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 14:20:30.161499  198824 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1124 14:20:30.164621  198824 out.go:179] * Done! kubectl is now configured to use "embed-certs-720293" cluster and "default" namespace by default
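
For reference, once "Done!" is printed the kubeconfig context has already been switched, so the cluster can be inspected immediately. A minimal sketch with standard kubectl (not part of the captured run):

	kubectl config current-context   # expect: embed-certs-720293
	kubectl get nodes                # node should be Ready, matching the pod_ready waits above
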
	I1124 14:20:30.595418  202335 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 2.000837956s
	I1124 14:20:30.599210  202335 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1124 14:20:30.599692  202335 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8444/livez
	I1124 14:20:30.600013  202335 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1124 14:20:30.600813  202335 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1124 14:20:33.733116  202335 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.131881343s
	I1124 14:20:35.607201  202335 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 5.00572933s
	I1124 14:20:37.102044  202335 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.501587155s
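
For reference, the control-plane-check lines above probe plain HTTP(S) health endpoints, so the same checks can be reproduced by hand from the node. A sketch using the exact URLs from the log (-k because the serving certificates are cluster-internal):

	curl -sk https://192.168.76.2:8444/livez    # kube-apiserver (8444 is this profile's non-default port)
	curl -sk https://127.0.0.1:10257/healthz    # kube-controller-manager
	curl -sk https://127.0.0.1:10259/livez      # kube-scheduler
	curl -s  http://127.0.0.1:10248/healthz     # kubelet, per the kubelet-check above
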
	I1124 14:20:37.122839  202335 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1124 14:20:37.139639  202335 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1124 14:20:37.155808  202335 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1124 14:20:37.156021  202335 kubeadm.go:319] [mark-control-plane] Marking the node default-k8s-diff-port-152851 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1124 14:20:37.169343  202335 kubeadm.go:319] [bootstrap-token] Using token: 1r0ssm.nhq1upa2er09iuov
	I1124 14:20:37.172446  202335 out.go:252]   - Configuring RBAC rules ...
	I1124 14:20:37.172599  202335 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1124 14:20:37.178899  202335 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1124 14:20:37.189849  202335 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1124 14:20:37.194492  202335 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1124 14:20:37.201072  202335 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1124 14:20:37.205644  202335 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1124 14:20:37.517471  202335 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1124 14:20:37.955683  202335 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1124 14:20:38.509015  202335 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1124 14:20:38.511106  202335 kubeadm.go:319] 
	I1124 14:20:38.511182  202335 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1124 14:20:38.511188  202335 kubeadm.go:319] 
	I1124 14:20:38.511265  202335 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1124 14:20:38.511276  202335 kubeadm.go:319] 
	I1124 14:20:38.511302  202335 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1124 14:20:38.513944  202335 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1124 14:20:38.514031  202335 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1124 14:20:38.514046  202335 kubeadm.go:319] 
	I1124 14:20:38.514102  202335 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1124 14:20:38.514106  202335 kubeadm.go:319] 
	I1124 14:20:38.514166  202335 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1124 14:20:38.514187  202335 kubeadm.go:319] 
	I1124 14:20:38.514263  202335 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1124 14:20:38.514347  202335 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1124 14:20:38.514422  202335 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1124 14:20:38.514426  202335 kubeadm.go:319] 
	I1124 14:20:38.514518  202335 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1124 14:20:38.514602  202335 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1124 14:20:38.514611  202335 kubeadm.go:319] 
	I1124 14:20:38.514719  202335 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8444 --token 1r0ssm.nhq1upa2er09iuov \
	I1124 14:20:38.514851  202335 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:37f0f49cec723293ddb4e564b6685275917c85627d2c55051ccb0f083d16274f \
	I1124 14:20:38.514880  202335 kubeadm.go:319] 	--control-plane 
	I1124 14:20:38.514887  202335 kubeadm.go:319] 
	I1124 14:20:38.514972  202335 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1124 14:20:38.514978  202335 kubeadm.go:319] 
	I1124 14:20:38.515056  202335 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8444 --token 1r0ssm.nhq1upa2er09iuov \
	I1124 14:20:38.515160  202335 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:37f0f49cec723293ddb4e564b6685275917c85627d2c55051ccb0f083d16274f 
	I1124 14:20:38.520929  202335 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1124 14:20:38.521160  202335 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1124 14:20:38.521271  202335 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
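
The bootstrap token in the join commands above is short-lived (24 hours by default), so it cannot be reused from an old log. A hedged sketch of regenerating both halves on the control plane, using standard kubeadm/openssl invocations; the certificate path comes from the [certs] phase above:

	kubeadm token create --print-join-command
	# recompute --discovery-token-ca-cert-hash from the cluster CA:
	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'
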
	I1124 14:20:38.521292  202335 cni.go:84] Creating CNI manager for ""
	I1124 14:20:38.521304  202335 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 14:20:38.526552  202335 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1124 14:20:38.529412  202335 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1124 14:20:38.533787  202335 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1124 14:20:38.533808  202335 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1124 14:20:38.551778  202335 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
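
The manifest applied above deploys kindnet, the CNI recommended for the docker driver + crio runtime combination. A minimal rollout check, assuming the upstream app=kindnet label and reusing the binary and kubeconfig paths from the log:

	sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  -n kube-system get pods -l app=kindnet -o wide
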
	I1124 14:20:38.857833  202335 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1124 14:20:38.858000  202335 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:20:38.858070  202335 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-152851 minikube.k8s.io/updated_at=2025_11_24T14_20_38_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=b5d1c9f4e75f4e638a533695fd62619949cefcab minikube.k8s.io/name=default-k8s-diff-port-152851 minikube.k8s.io/primary=true
	I1124 14:20:39.034165  202335 ops.go:34] apiserver oom_adj: -16
	I1124 14:20:39.034276  202335 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:20:39.534698  202335 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:20:40.034463  202335 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:20:40.534556  202335 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:20:41.035427  202335 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:20:41.534422  202335 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:20:42.034686  202335 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:20:42.534391  202335 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:20:43.034342  202335 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:20:43.150488  202335 kubeadm.go:1114] duration metric: took 4.292549445s to wait for elevateKubeSystemPrivileges
	I1124 14:20:43.150516  202335 kubeadm.go:403] duration metric: took 22.929111223s to StartCluster
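
The nine `kubectl get sa default` calls above are a 500ms poll (visible in the timestamps): the elevateKubeSystemPrivileges step waits for the controller-manager to create the "default" ServiceAccount before proceeding. An equivalent shell sketch, assuming the same paths:

	until sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default \
	    --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5   # matches the ~500ms cadence in the log
	done
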
	I1124 14:20:43.150533  202335 settings.go:142] acquiring lock: {Name:mk89c1ba43c874315f683e1eb3a8f5ff3817a931 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:20:43.150598  202335 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21932-2805/kubeconfig
	I1124 14:20:43.152144  202335 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2805/kubeconfig: {Name:mk95d10d27091d631e85a5a3c35d5e4e38630871 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:20:43.152393  202335 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 14:20:43.152633  202335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1124 14:20:43.152951  202335 config.go:182] Loaded profile config "default-k8s-diff-port-152851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 14:20:43.152989  202335 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 14:20:43.153047  202335 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-152851"
	I1124 14:20:43.153064  202335 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-152851"
	I1124 14:20:43.153086  202335 host.go:66] Checking if "default-k8s-diff-port-152851" exists ...
	I1124 14:20:43.153593  202335 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-152851 --format={{.State.Status}}
	I1124 14:20:43.153859  202335 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-152851"
	I1124 14:20:43.153877  202335 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-152851"
	I1124 14:20:43.154145  202335 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-152851 --format={{.State.Status}}
	I1124 14:20:43.157463  202335 out.go:179] * Verifying Kubernetes components...
	I1124 14:20:43.161249  202335 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 14:20:43.194174  202335 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 14:20:43.198436  202335 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 14:20:43.198461  202335 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 14:20:43.198524  202335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-152851
	I1124 14:20:43.200325  202335 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-152851"
	I1124 14:20:43.200367  202335 host.go:66] Checking if "default-k8s-diff-port-152851" exists ...
	I1124 14:20:43.200836  202335 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-152851 --format={{.State.Status}}
	I1124 14:20:43.241498  202335 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 14:20:43.241521  202335 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 14:20:43.241582  202335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-152851
	I1124 14:20:43.245598  202335 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/default-k8s-diff-port-152851/id_rsa Username:docker}
	I1124 14:20:43.279601  202335 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/default-k8s-diff-port-152851/id_rsa Username:docker}
	I1124 14:20:43.717447  202335 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 14:20:43.733813  202335 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 14:20:43.830734  202335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1124 14:20:43.830873  202335 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 14:20:45.128194  202335 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.394339727s)
	I1124 14:20:45.128424  202335 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.297530715s)
	I1124 14:20:45.129665  202335 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-152851" to be "Ready" ...
	I1124 14:20:45.129957  202335 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.299180505s)
	I1124 14:20:45.129975  202335 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
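
The sed pipeline completed above rewrites the CoreDNS Corefile in place: one expression inserts a hosts block before the forward plugin, the other inserts log before errors. The resulting fragment looks roughly like this (a sketch reconstructed from the two sed expressions, not a dump of the live ConfigMap):

	.:53 {
	    log                                    # inserted before the existing errors line
	    errors
	    # ... health/ready/kubernetes plugins unchanged ...
	    hosts {
	       192.168.76.1 host.minikube.internal
	       fallthrough
	    }
	    forward . /etc/resolv.conf
	}
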
	I1124 14:20:45.133649  202335 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
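
With the cluster up, the enabled-addon set can be checked against the toEnable map logged at 14:20:43; a standard minikube subcommand (not captured in this run):

	out/minikube-linux-arm64 -p default-k8s-diff-port-152851 addons list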
	
	
	==> CRI-O <==
	Nov 24 14:20:26 embed-certs-720293 crio[655]: time="2025-11-24T14:20:26.818988801Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 24 14:20:26 embed-certs-720293 crio[655]: time="2025-11-24T14:20:26.831652631Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 24 14:20:26 embed-certs-720293 crio[655]: time="2025-11-24T14:20:26.831822552Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 24 14:20:26 embed-certs-720293 crio[655]: time="2025-11-24T14:20:26.831931493Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 24 14:20:26 embed-certs-720293 crio[655]: time="2025-11-24T14:20:26.837507535Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 24 14:20:26 embed-certs-720293 crio[655]: time="2025-11-24T14:20:26.837676784Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 24 14:20:26 embed-certs-720293 crio[655]: time="2025-11-24T14:20:26.837754783Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 24 14:20:26 embed-certs-720293 crio[655]: time="2025-11-24T14:20:26.841301774Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 24 14:20:26 embed-certs-720293 crio[655]: time="2025-11-24T14:20:26.841489452Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 24 14:20:26 embed-certs-720293 crio[655]: time="2025-11-24T14:20:26.841605498Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 24 14:20:26 embed-certs-720293 crio[655]: time="2025-11-24T14:20:26.844885122Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 24 14:20:26 embed-certs-720293 crio[655]: time="2025-11-24T14:20:26.845033645Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 24 14:20:28 embed-certs-720293 crio[655]: time="2025-11-24T14:20:28.066831244Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=6913cd53-04bb-4297-8f73-6b24c5157bd3 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 14:20:28 embed-certs-720293 crio[655]: time="2025-11-24T14:20:28.068731612Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=9c499223-7794-4e7b-a3ef-d17d8dea661b name=/runtime.v1.ImageService/ImageStatus
	Nov 24 14:20:28 embed-certs-720293 crio[655]: time="2025-11-24T14:20:28.070019347Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2bdsr/dashboard-metrics-scraper" id=3114f366-33be-4ee8-a6ff-d65da75abba6 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 14:20:28 embed-certs-720293 crio[655]: time="2025-11-24T14:20:28.070157908Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 14:20:28 embed-certs-720293 crio[655]: time="2025-11-24T14:20:28.087282256Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 14:20:28 embed-certs-720293 crio[655]: time="2025-11-24T14:20:28.089341084Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 14:20:28 embed-certs-720293 crio[655]: time="2025-11-24T14:20:28.107189419Z" level=info msg="Created container f4b8663a77e015998912426d36c1f9b5b969ea5de94d97c635621222587cc7c2: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2bdsr/dashboard-metrics-scraper" id=3114f366-33be-4ee8-a6ff-d65da75abba6 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 14:20:28 embed-certs-720293 crio[655]: time="2025-11-24T14:20:28.112781264Z" level=info msg="Starting container: f4b8663a77e015998912426d36c1f9b5b969ea5de94d97c635621222587cc7c2" id=11b97435-65b8-418c-8c48-d2da8a8a9be6 name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 14:20:28 embed-certs-720293 crio[655]: time="2025-11-24T14:20:28.117439741Z" level=info msg="Started container" PID=1688 containerID=f4b8663a77e015998912426d36c1f9b5b969ea5de94d97c635621222587cc7c2 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2bdsr/dashboard-metrics-scraper id=11b97435-65b8-418c-8c48-d2da8a8a9be6 name=/runtime.v1.RuntimeService/StartContainer sandboxID=4b58f0630032bd62bbb8380ce0879fefa25fbe4c4be592c4771b858d88e057fb
	Nov 24 14:20:28 embed-certs-720293 conmon[1686]: conmon f4b8663a77e015998912 <ninfo>: container 1688 exited with status 1
	Nov 24 14:20:28 embed-certs-720293 crio[655]: time="2025-11-24T14:20:28.420493421Z" level=info msg="Removing container: 4381ccbfaceff81d563e9d804d5e1a38d69d47f328787b1031d0cc609bc89bf5" id=1311d61f-3c12-43c9-a4b3-7d46b89a3ee2 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 24 14:20:28 embed-certs-720293 crio[655]: time="2025-11-24T14:20:28.443627145Z" level=info msg="Error loading conmon cgroup of container 4381ccbfaceff81d563e9d804d5e1a38d69d47f328787b1031d0cc609bc89bf5: cgroup deleted" id=1311d61f-3c12-43c9-a4b3-7d46b89a3ee2 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 24 14:20:28 embed-certs-720293 crio[655]: time="2025-11-24T14:20:28.450960932Z" level=info msg="Removed container 4381ccbfaceff81d563e9d804d5e1a38d69d47f328787b1031d0cc609bc89bf5: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2bdsr/dashboard-metrics-scraper" id=1311d61f-3c12-43c9-a4b3-7d46b89a3ee2 name=/runtime.v1.RuntimeService/RemoveContainer
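
The CRI-O log above shows dashboard-metrics-scraper being created, exiting with status 1, and its previous attempt being garbage-collected, i.e. a restart loop. A hedged sketch for digging further with crictl on the node (standard CRI tooling; the container ID prefix is taken from the log):

	sudo crictl ps -a --name dashboard-metrics-scraper   # lists the Exited attempts
	sudo crictl logs f4b8663a77e01                       # why attempt 2 exited with status 1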
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	f4b8663a77e01       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           20 seconds ago       Exited              dashboard-metrics-scraper   2                   4b58f0630032b       dashboard-metrics-scraper-6ffb444bf9-2bdsr   kubernetes-dashboard
	467261f4d2571       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           30 seconds ago       Running             storage-provisioner         2                   50b77adcdb377       storage-provisioner                          kube-system
	253bcd0f532f6       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   48 seconds ago       Running             kubernetes-dashboard        0                   656761d966e4f       kubernetes-dashboard-855c9754f9-7rfrv        kubernetes-dashboard
	42aec503a0da6       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           About a minute ago   Running             coredns                     1                   c3c57dc5ba36e       coredns-66bc5c9577-6nztq                     kube-system
	783bd123b9780       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           About a minute ago   Running             busybox                     1                   70b1ac05fea3f       busybox                                      default
	00a43be63f0ee       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           About a minute ago   Running             kube-proxy                  1                   af799f14d2ed4       kube-proxy-pwpl4                             kube-system
	39826316da886       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           About a minute ago   Exited              storage-provisioner         1                   50b77adcdb377       storage-provisioner                          kube-system
	cd3870847e89f       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           About a minute ago   Running             kindnet-cni                 1                   834410470ac72       kindnet-ft88w                                kube-system
	7a20914603732       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   6fa44c90844da       kube-controller-manager-embed-certs-720293   kube-system
	7634741324dd1       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   24b07d8445ab9       kube-scheduler-embed-certs-720293            kube-system
	43d901c75e4d3       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   760bd3b69d313       etcd-embed-certs-720293                      kube-system
	da3f0798a706d       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   2f2b000532175       kube-apiserver-embed-certs-720293            kube-system
	
	
	==> coredns [42aec503a0da6a36b971d1c7b96c464efde9b44c146277076e103b249a49c5de] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:50117 - 31578 "HINFO IN 3175689223186040760.3352707592823412203. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.034811987s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
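
The i/o timeouts above all target 10.96.0.1:443, the in-cluster apiserver ClusterIP, and they stop once the pod network is reconciled. A minimal cross-check with standard kubectl:

	kubectl get svc kubernetes -n default          # 10.96.0.1:443, the ClusterIP the errors point at
	kubectl -n kube-system get pods -l k8s-app=kube-dns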
	
	
	==> describe nodes <==
	Name:               embed-certs-720293
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-720293
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b5d1c9f4e75f4e638a533695fd62619949cefcab
	                    minikube.k8s.io/name=embed-certs-720293
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T14_18_18_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 14:18:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-720293
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 14:20:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 14:20:16 +0000   Mon, 24 Nov 2025 14:18:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 14:20:16 +0000   Mon, 24 Nov 2025 14:18:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 14:20:16 +0000   Mon, 24 Nov 2025 14:18:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Nov 2025 14:20:16 +0000   Mon, 24 Nov 2025 14:19:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    embed-certs-720293
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 7283ea1857f18f20a875c29069214c9d
	  System UUID:                f982cc7c-133c-414c-b480-dd4b30e870c6
	  Boot ID:                    1b5f797b-5607-4a65-8de2-379783b7e272
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         101s
	  kube-system                 coredns-66bc5c9577-6nztq                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m26s
	  kube-system                 etcd-embed-certs-720293                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m31s
	  kube-system                 kindnet-ft88w                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m27s
	  kube-system                 kube-apiserver-embed-certs-720293             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m31s
	  kube-system                 kube-controller-manager-embed-certs-720293    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m31s
	  kube-system                 kube-proxy-pwpl4                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m27s
	  kube-system                 kube-scheduler-embed-certs-720293             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m31s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m25s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-2bdsr    0 (0%)        0 (0%)      0 (0%)           0 (0%)         59s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-7rfrv         0 (0%)        0 (0%)      0 (0%)           0 (0%)         59s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m24s                  kube-proxy       
	  Normal   Starting                 61s                    kube-proxy       
	  Normal   Starting                 2m40s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m40s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m40s (x8 over 2m40s)  kubelet          Node embed-certs-720293 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m40s (x8 over 2m40s)  kubelet          Node embed-certs-720293 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m40s (x8 over 2m40s)  kubelet          Node embed-certs-720293 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    2m31s                  kubelet          Node embed-certs-720293 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 2m31s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m31s                  kubelet          Node embed-certs-720293 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     2m31s                  kubelet          Node embed-certs-720293 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m31s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m27s                  node-controller  Node embed-certs-720293 event: Registered Node embed-certs-720293 in Controller
	  Normal   NodeReady                105s                   kubelet          Node embed-certs-720293 status is now: NodeReady
	  Normal   Starting                 69s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 69s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  68s (x8 over 68s)      kubelet          Node embed-certs-720293 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    68s (x8 over 68s)      kubelet          Node embed-certs-720293 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     68s (x8 over 68s)      kubelet          Node embed-certs-720293 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           60s                    node-controller  Node embed-certs-720293 event: Registered Node embed-certs-720293 in Controller
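
Three separate "Starting kubelet" event groups (2m40s, 2m31s, and 69s ago) appear above, consistent with the stop/restart cycle this test group exercises. The same node events can be pulled directly; a sketch with standard kubectl:

	kubectl get events --field-selector involvedObject.name=embed-certs-720293 \
	  --sort-by=.lastTimestamp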
	
	
	==> dmesg <==
	[Nov24 13:56] overlayfs: idmapped layers are currently not supported
	[Nov24 13:57] overlayfs: idmapped layers are currently not supported
	[Nov24 13:58] overlayfs: idmapped layers are currently not supported
	[  +2.963383] overlayfs: idmapped layers are currently not supported
	[ +47.364934] overlayfs: idmapped layers are currently not supported
	[Nov24 13:59] overlayfs: idmapped layers are currently not supported
	[Nov24 14:00] overlayfs: idmapped layers are currently not supported
	[ +26.972375] overlayfs: idmapped layers are currently not supported
	[Nov24 14:02] overlayfs: idmapped layers are currently not supported
	[Nov24 14:03] overlayfs: idmapped layers are currently not supported
	[Nov24 14:05] overlayfs: idmapped layers are currently not supported
	[Nov24 14:07] overlayfs: idmapped layers are currently not supported
	[ +22.741489] overlayfs: idmapped layers are currently not supported
	[Nov24 14:11] overlayfs: idmapped layers are currently not supported
	[Nov24 14:13] overlayfs: idmapped layers are currently not supported
	[ +29.661409] overlayfs: idmapped layers are currently not supported
	[ +14.398898] overlayfs: idmapped layers are currently not supported
	[Nov24 14:14] overlayfs: idmapped layers are currently not supported
	[ +36.148198] overlayfs: idmapped layers are currently not supported
	[Nov24 14:16] overlayfs: idmapped layers are currently not supported
	[Nov24 14:17] overlayfs: idmapped layers are currently not supported
	[Nov24 14:18] overlayfs: idmapped layers are currently not supported
	[ +49.916713] overlayfs: idmapped layers are currently not supported
	[Nov24 14:19] overlayfs: idmapped layers are currently not supported
	[Nov24 14:20] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [43d901c75e4d3ea7cfdd826b2f38e870e2be39de21570400fd187f7a2239344b] <==
	{"level":"warn","ts":"2025-11-24T14:19:43.805983Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43512","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:19:43.883984Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43526","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:19:43.894997Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43546","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:19:43.933223Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43564","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:19:43.965539Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43582","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:19:43.976987Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43596","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:19:44.006753Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43618","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:19:44.034569Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43646","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:19:44.066906Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43658","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:19:44.104286Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43676","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:19:44.132263Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43694","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:19:44.189674Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43712","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:19:44.224792Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43740","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:19:44.245726Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43762","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:19:44.271709Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43782","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:19:44.289448Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43804","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:19:44.307074Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43824","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:19:44.336224Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43828","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:19:44.353720Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43852","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:19:44.371558Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43862","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:19:44.389413Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43884","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:19:44.424364Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43896","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:19:44.442164Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43918","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:19:44.461636Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43940","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:19:44.526551Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43960","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 14:20:48 up  2:03,  0 user,  load average: 3.44, 3.01, 2.55
	Linux embed-certs-720293 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [cd3870847e89f7ad9748689f39400529da3e34ea80bd2e9c5d50b94014870174] <==
	I1124 14:19:46.620918       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1124 14:19:46.621408       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1124 14:19:46.621572       1 main.go:148] setting mtu 1500 for CNI 
	I1124 14:19:46.621613       1 main.go:178] kindnetd IP family: "ipv4"
	I1124 14:19:46.621656       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-24T14:19:46Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1124 14:19:46.830970       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1124 14:19:46.831002       1 controller.go:381] "Waiting for informer caches to sync"
	I1124 14:19:46.831026       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1124 14:19:46.831179       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1124 14:20:16.816224       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1124 14:20:16.820870       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1124 14:20:16.821087       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1124 14:20:16.821239       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1124 14:20:18.231650       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1124 14:20:18.231771       1 metrics.go:72] Registering metrics
	I1124 14:20:18.231874       1 controller.go:711] "Syncing nftables rules"
	I1124 14:20:26.818483       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1124 14:20:26.818676       1 main.go:301] handling current node
	I1124 14:20:36.815227       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1124 14:20:36.815442       1 main.go:301] handling current node
	I1124 14:20:46.824351       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1124 14:20:46.824380       1 main.go:301] handling current node
	
	
	==> kube-apiserver [da3f0798a706df28b161fc15c24ff964503411fba4af93d09ab0786003dc32ea] <==
	I1124 14:19:45.540801       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1124 14:19:45.545449       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1124 14:19:45.577324       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1124 14:19:45.577358       1 policy_source.go:240] refreshing policies
	I1124 14:19:45.579015       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1124 14:19:45.579070       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1124 14:19:45.579089       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1124 14:19:45.626736       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1124 14:19:45.626769       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1124 14:19:45.626962       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1124 14:19:45.631682       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 14:19:45.655884       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1124 14:19:45.675634       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	E1124 14:19:45.769252       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1124 14:19:46.069231       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1124 14:19:46.337126       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1124 14:19:46.510986       1 controller.go:667] quota admission added evaluator for: namespaces
	I1124 14:19:46.646107       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1124 14:19:46.772580       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1124 14:19:46.804272       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1124 14:19:46.945122       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.104.116.153"}
	I1124 14:19:46.983751       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.101.207.16"}
	I1124 14:19:49.099549       1 controller.go:667] quota admission added evaluator for: endpoints
	I1124 14:19:49.398040       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1124 14:19:49.451389       1 controller.go:667] quota admission added evaluator for: replicasets.apps
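
The two alloc.go lines above record the ClusterIPs handed to the dashboard Services. A quick consistency check (standard kubectl, not from the captured run):

	kubectl -n kubernetes-dashboard get svc -o wide
	# expect kubernetes-dashboard = 10.104.116.153, dashboard-metrics-scraper = 10.101.207.16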
	
	
	==> kube-controller-manager [7a20914603732648c5d9ff34200e808b2002ae00dc4000fe37adb370011a3888] <==
	I1124 14:19:48.963166       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-720293"
	I1124 14:19:48.963212       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1124 14:19:48.963727       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1124 14:19:48.966425       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1124 14:19:48.969588       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1124 14:19:48.969718       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1124 14:19:48.970047       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 14:19:48.970118       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1124 14:19:48.971079       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1124 14:19:48.973396       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1124 14:19:48.973810       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1124 14:19:48.976014       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1124 14:19:48.976332       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1124 14:19:48.978577       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1124 14:19:48.980873       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1124 14:19:48.983801       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1124 14:19:48.990900       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1124 14:19:48.991009       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1124 14:19:48.992115       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1124 14:19:48.992186       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1124 14:19:48.992324       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 14:19:48.992340       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1124 14:19:48.992347       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1124 14:19:48.992469       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1124 14:19:49.021303       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [00a43be63f0ee0c7c3caa8dd1d91a6db23be6515f0e5612d27abb6cdc903cf4b] <==
	I1124 14:19:46.849346       1 server_linux.go:53] "Using iptables proxy"
	I1124 14:19:47.086885       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1124 14:19:47.187594       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1124 14:19:47.187632       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1124 14:19:47.187704       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 14:19:47.220988       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 14:19:47.221045       1 server_linux.go:132] "Using iptables Proxier"
	I1124 14:19:47.227528       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 14:19:47.227948       1 server.go:527] "Version info" version="v1.34.1"
	I1124 14:19:47.227975       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 14:19:47.230528       1 config.go:200] "Starting service config controller"
	I1124 14:19:47.230551       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 14:19:47.230570       1 config.go:106] "Starting endpoint slice config controller"
	I1124 14:19:47.230575       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 14:19:47.230588       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 14:19:47.230593       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1124 14:19:47.234142       1 config.go:309] "Starting node config controller"
	I1124 14:19:47.234189       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 14:19:47.234202       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1124 14:19:47.330662       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1124 14:19:47.330675       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1124 14:19:47.330736       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [7634741324dd1d91cc93df52ab62f4e54882e2826f3185dee5ff5c38bdffd3cf] <==
	I1124 14:19:43.081282       1 serving.go:386] Generated self-signed cert in-memory
	W1124 14:19:45.546451       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1124 14:19:45.546492       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1124 14:19:45.546502       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1124 14:19:45.546510       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1124 14:19:45.606719       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1124 14:19:45.621182       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 14:19:45.633327       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1124 14:19:45.634020       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1124 14:19:45.634079       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 14:19:45.647811       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 14:19:45.748503       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 24 14:19:46 embed-certs-720293 kubelet[783]: W1124 14:19:46.466418     783 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/70d00db6e7822d3b00ce565e804c7ecaca79c8fb11b2d568f3f30fb3df09a34b/crio-70b1ac05fea3f99efdbc12ddc3e17d202d6a6100ad046026db1d32036ca60995 WatchSource:0}: Error finding container 70b1ac05fea3f99efdbc12ddc3e17d202d6a6100ad046026db1d32036ca60995: Status 404 returned error can't find the container with id 70b1ac05fea3f99efdbc12ddc3e17d202d6a6100ad046026db1d32036ca60995
	Nov 24 14:19:46 embed-certs-720293 kubelet[783]: W1124 14:19:46.549501     783 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/70d00db6e7822d3b00ce565e804c7ecaca79c8fb11b2d568f3f30fb3df09a34b/crio-c3c57dc5ba36e850403c489378062a88fb81de4f608bc351d77f316149e3d441 WatchSource:0}: Error finding container c3c57dc5ba36e850403c489378062a88fb81de4f608bc351d77f316149e3d441: Status 404 returned error can't find the container with id c3c57dc5ba36e850403c489378062a88fb81de4f608bc351d77f316149e3d441
	Nov 24 14:19:49 embed-certs-720293 kubelet[783]: I1124 14:19:49.676538     783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99bwc\" (UniqueName: \"kubernetes.io/projected/85c34e57-0c32-4608-9857-e57d504ed2d4-kube-api-access-99bwc\") pod \"dashboard-metrics-scraper-6ffb444bf9-2bdsr\" (UID: \"85c34e57-0c32-4608-9857-e57d504ed2d4\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2bdsr"
	Nov 24 14:19:49 embed-certs-720293 kubelet[783]: I1124 14:19:49.676602     783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/85c34e57-0c32-4608-9857-e57d504ed2d4-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-2bdsr\" (UID: \"85c34e57-0c32-4608-9857-e57d504ed2d4\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2bdsr"
	Nov 24 14:19:49 embed-certs-720293 kubelet[783]: I1124 14:19:49.676628     783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ts8qc\" (UniqueName: \"kubernetes.io/projected/54479f7d-df5f-4bdb-9bf0-fffe91f3f263-kube-api-access-ts8qc\") pod \"kubernetes-dashboard-855c9754f9-7rfrv\" (UID: \"54479f7d-df5f-4bdb-9bf0-fffe91f3f263\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-7rfrv"
	Nov 24 14:19:49 embed-certs-720293 kubelet[783]: I1124 14:19:49.676651     783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/54479f7d-df5f-4bdb-9bf0-fffe91f3f263-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-7rfrv\" (UID: \"54479f7d-df5f-4bdb-9bf0-fffe91f3f263\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-7rfrv"
	Nov 24 14:19:49 embed-certs-720293 kubelet[783]: W1124 14:19:49.929156     783 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/70d00db6e7822d3b00ce565e804c7ecaca79c8fb11b2d568f3f30fb3df09a34b/crio-656761d966e4fe27ff91bbf258cdec3239ee8b3f4546005eb1862ea6c00cea8a WatchSource:0}: Error finding container 656761d966e4fe27ff91bbf258cdec3239ee8b3f4546005eb1862ea6c00cea8a: Status 404 returned error can't find the container with id 656761d966e4fe27ff91bbf258cdec3239ee8b3f4546005eb1862ea6c00cea8a
	Nov 24 14:19:49 embed-certs-720293 kubelet[783]: W1124 14:19:49.951473     783 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/70d00db6e7822d3b00ce565e804c7ecaca79c8fb11b2d568f3f30fb3df09a34b/crio-4b58f0630032bd62bbb8380ce0879fefa25fbe4c4be592c4771b858d88e057fb WatchSource:0}: Error finding container 4b58f0630032bd62bbb8380ce0879fefa25fbe4c4be592c4771b858d88e057fb: Status 404 returned error can't find the container with id 4b58f0630032bd62bbb8380ce0879fefa25fbe4c4be592c4771b858d88e057fb
	Nov 24 14:20:00 embed-certs-720293 kubelet[783]: I1124 14:20:00.434327     783 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-7rfrv" podStartSLOduration=1.474627137 podStartE2EDuration="11.434305677s" podCreationTimestamp="2025-11-24 14:19:49 +0000 UTC" firstStartedPulling="2025-11-24 14:19:49.934126072 +0000 UTC m=+10.103278256" lastFinishedPulling="2025-11-24 14:19:59.89380462 +0000 UTC m=+20.062956796" observedRunningTime="2025-11-24 14:20:00.43321614 +0000 UTC m=+20.602368315" watchObservedRunningTime="2025-11-24 14:20:00.434305677 +0000 UTC m=+20.603457968"
	Nov 24 14:20:07 embed-certs-720293 kubelet[783]: I1124 14:20:07.348455     783 scope.go:117] "RemoveContainer" containerID="17400581bf2ecd7f4c1669300644ca9f310f6dac63342b606cc38a4388a5ab6f"
	Nov 24 14:20:08 embed-certs-720293 kubelet[783]: I1124 14:20:08.352750     783 scope.go:117] "RemoveContainer" containerID="4381ccbfaceff81d563e9d804d5e1a38d69d47f328787b1031d0cc609bc89bf5"
	Nov 24 14:20:08 embed-certs-720293 kubelet[783]: E1124 14:20:08.352911     783 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-2bdsr_kubernetes-dashboard(85c34e57-0c32-4608-9857-e57d504ed2d4)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2bdsr" podUID="85c34e57-0c32-4608-9857-e57d504ed2d4"
	Nov 24 14:20:08 embed-certs-720293 kubelet[783]: I1124 14:20:08.353092     783 scope.go:117] "RemoveContainer" containerID="17400581bf2ecd7f4c1669300644ca9f310f6dac63342b606cc38a4388a5ab6f"
	Nov 24 14:20:15 embed-certs-720293 kubelet[783]: I1124 14:20:15.858794     783 scope.go:117] "RemoveContainer" containerID="4381ccbfaceff81d563e9d804d5e1a38d69d47f328787b1031d0cc609bc89bf5"
	Nov 24 14:20:15 embed-certs-720293 kubelet[783]: E1124 14:20:15.858973     783 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-2bdsr_kubernetes-dashboard(85c34e57-0c32-4608-9857-e57d504ed2d4)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2bdsr" podUID="85c34e57-0c32-4608-9857-e57d504ed2d4"
	Nov 24 14:20:17 embed-certs-720293 kubelet[783]: I1124 14:20:17.378004     783 scope.go:117] "RemoveContainer" containerID="39826316da8861c6c390b371b0dfee6fb9b2d796fc941ea2368f1621b3599610"
	Nov 24 14:20:28 embed-certs-720293 kubelet[783]: I1124 14:20:28.065444     783 scope.go:117] "RemoveContainer" containerID="4381ccbfaceff81d563e9d804d5e1a38d69d47f328787b1031d0cc609bc89bf5"
	Nov 24 14:20:28 embed-certs-720293 kubelet[783]: I1124 14:20:28.410206     783 scope.go:117] "RemoveContainer" containerID="4381ccbfaceff81d563e9d804d5e1a38d69d47f328787b1031d0cc609bc89bf5"
	Nov 24 14:20:28 embed-certs-720293 kubelet[783]: I1124 14:20:28.411070     783 scope.go:117] "RemoveContainer" containerID="f4b8663a77e015998912426d36c1f9b5b969ea5de94d97c635621222587cc7c2"
	Nov 24 14:20:28 embed-certs-720293 kubelet[783]: E1124 14:20:28.411558     783 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-2bdsr_kubernetes-dashboard(85c34e57-0c32-4608-9857-e57d504ed2d4)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2bdsr" podUID="85c34e57-0c32-4608-9857-e57d504ed2d4"
	Nov 24 14:20:35 embed-certs-720293 kubelet[783]: I1124 14:20:35.859236     783 scope.go:117] "RemoveContainer" containerID="f4b8663a77e015998912426d36c1f9b5b969ea5de94d97c635621222587cc7c2"
	Nov 24 14:20:35 embed-certs-720293 kubelet[783]: E1124 14:20:35.859941     783 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-2bdsr_kubernetes-dashboard(85c34e57-0c32-4608-9857-e57d504ed2d4)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2bdsr" podUID="85c34e57-0c32-4608-9857-e57d504ed2d4"
	Nov 24 14:20:42 embed-certs-720293 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 24 14:20:42 embed-certs-720293 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 24 14:20:42 embed-certs-720293 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [253bcd0f532f66e0e5b2fc4a4c88d5958c2a131d6a5a69048a5a2749195b6547] <==
	2025/11/24 14:20:00 Using namespace: kubernetes-dashboard
	2025/11/24 14:20:00 Using in-cluster config to connect to apiserver
	2025/11/24 14:20:00 Using secret token for csrf signing
	2025/11/24 14:20:00 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/24 14:20:00 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/24 14:20:00 Successful initial request to the apiserver, version: v1.34.1
	2025/11/24 14:20:00 Generating JWE encryption key
	2025/11/24 14:20:00 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/24 14:20:00 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/24 14:20:01 Initializing JWE encryption key from synchronized object
	2025/11/24 14:20:01 Creating in-cluster Sidecar client
	2025/11/24 14:20:01 Serving insecurely on HTTP port: 9090
	2025/11/24 14:20:01 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/24 14:20:31 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/24 14:20:00 Starting overwatch
	
	
	==> storage-provisioner [39826316da8861c6c390b371b0dfee6fb9b2d796fc941ea2368f1621b3599610] <==
	I1124 14:19:46.702703       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1124 14:20:16.704797       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [467261f4d2571726e5a5f78ed70bec6f37018a976a06002a0150522a12c9e447] <==
	W1124 14:20:17.466019       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:20:20.921324       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:20:25.181958       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:20:28.782095       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:20:31.836615       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:20:34.858676       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:20:34.868412       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1124 14:20:34.868584       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1124 14:20:34.868777       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-720293_6c184d3c-979d-44bc-972a-cc3066323b01!
	I1124 14:20:34.879272       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b678df36-62b8-4640-a341-449d1c1095fb", APIVersion:"v1", ResourceVersion:"685", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-720293_6c184d3c-979d-44bc-972a-cc3066323b01 became leader
	W1124 14:20:34.895289       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:20:34.909784       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1124 14:20:34.974377       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-720293_6c184d3c-979d-44bc-972a-cc3066323b01!
	W1124 14:20:36.912620       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:20:36.919080       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:20:38.926183       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:20:38.933155       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:20:40.936839       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:20:40.948306       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:20:42.951231       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:20:42.955841       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:20:44.959883       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:20:44.974582       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:20:46.978981       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:20:46.986232       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
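The storage-provisioner log above is dominated by client-go deprecation warnings: its leader election still watches core/v1 Endpoints, which Kubernetes v1.33+ deprecates in favor of discovery.k8s.io/v1 EndpointSlice. A minimal client-go sketch of the replacement call, assuming in-cluster config as the provisioner itself uses (an illustration only, not the provisioner's actual code):

	// endpointslice_list.go: hypothetical illustration of the
	// discovery.k8s.io/v1 API that the warnings above point to.
	package main

	import (
		"context"
		"fmt"
		"log"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)

	func main() {
		cfg, err := rest.InClusterConfig() // in-cluster, as the provisioner runs
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		// EndpointSlices replaces the deprecated core/v1 Endpoints watch;
		// listing it draws no deprecation warning from the API server.
		slices, err := cs.DiscoveryV1().EndpointSlices("kube-system").List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			log.Fatal(err)
		}
		for _, s := range slices.Items {
			fmt.Printf("%s: %d endpoints\n", s.Name, len(s.Endpoints))
		}
	}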
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-720293 -n embed-certs-720293
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-720293 -n embed-certs-720293: exit status 2 (404.72604ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-720293 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (7.58s)
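Note that minikube status encodes component state in its exit code, which is why the post-mortem above treats exit status 2 as "may be ok" while stdout still reports "Running". A probe that tolerates those exit codes could look like the sketch below; it shells out to minikube status -o json, and the struct field names (Host, Kubelet, APIServer) are assumptions based on the --format={{.APIServer}} template used above, not verified against this build:

	// status_probe.go: hypothetical sketch; JSON field names are assumed.
	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("minikube", "status", "-p", "embed-certs-720293", "-o", "json").Output()
		if err != nil {
			// A non-zero exit encodes cluster state (stopped/paused
			// components); stdout is still populated, so only bail on
			// errors that are not plain exit codes.
			if _, ok := err.(*exec.ExitError); !ok {
				log.Fatal(err)
			}
		}
		var st struct{ Host, Kubelet, APIServer string }
		if err := json.Unmarshal(out, &st); err != nil {
			log.Fatal(err)
		}
		fmt.Printf("host=%s kubelet=%s apiserver=%s\n", st.Host, st.Kubelet, st.APIServer)
	}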

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.59s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-948249 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-948249 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (310.950975ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T14:21:28Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-948249 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
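The MK_ADDON_ENABLE_PAUSED failure above originates in minikube's pre-flight paused check: it runs sudo runc list -f json on the node, and runc exits 1 because its state directory /run/runc does not exist. A minimal sketch reproducing that probe from the host, assuming the node is reachable via docker exec under the container name shown in the docker inspect output below (a hypothetical helper, not minikube's implementation):

	// paused_check.go: hypothetical reproduction of the failing probe.
	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)

	// runc list -f json prints a JSON array of container state objects.
	type runcState struct {
		ID     string `json:"id"`
		Status string `json:"status"`
	}

	func main() {
		out, err := exec.Command("docker", "exec", "newest-cni-948249",
			"sudo", "runc", "list", "-f", "json").CombinedOutput()
		if err != nil {
			// This is the failure mode in the report: runc exits 1 with
			// "open /run/runc: no such file or directory" when its state
			// directory is missing.
			log.Fatalf("runc list failed: %v\n%s", err, out)
		}
		var states []runcState
		if err := json.Unmarshal(out, &states); err != nil {
			log.Fatal(err)
		}
		for _, s := range states {
			if s.Status == "paused" {
				fmt.Println("paused container:", s.ID)
			}
		}
	}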
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-948249
helpers_test.go:243: (dbg) docker inspect newest-cni-948249:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "772438acfd0507cc7ff013b62dafaae325e30233c90e406c54940e5df5577713",
	        "Created": "2025-11-24T14:20:58.322672048Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 206747,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T14:20:58.385565191Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/772438acfd0507cc7ff013b62dafaae325e30233c90e406c54940e5df5577713/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/772438acfd0507cc7ff013b62dafaae325e30233c90e406c54940e5df5577713/hostname",
	        "HostsPath": "/var/lib/docker/containers/772438acfd0507cc7ff013b62dafaae325e30233c90e406c54940e5df5577713/hosts",
	        "LogPath": "/var/lib/docker/containers/772438acfd0507cc7ff013b62dafaae325e30233c90e406c54940e5df5577713/772438acfd0507cc7ff013b62dafaae325e30233c90e406c54940e5df5577713-json.log",
	        "Name": "/newest-cni-948249",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "newest-cni-948249:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-948249",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "772438acfd0507cc7ff013b62dafaae325e30233c90e406c54940e5df5577713",
	                "LowerDir": "/var/lib/docker/overlay2/9d14b5b721eebc7a335511b1a48a7c6f6dd15a362ce50ae547a78ac50c54fc9f-init/diff:/var/lib/docker/overlay2/13a44a1c9c7389f495d930a01834ff28273a0e5eb2fe3411fc4db3ff0709690d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9d14b5b721eebc7a335511b1a48a7c6f6dd15a362ce50ae547a78ac50c54fc9f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9d14b5b721eebc7a335511b1a48a7c6f6dd15a362ce50ae547a78ac50c54fc9f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9d14b5b721eebc7a335511b1a48a7c6f6dd15a362ce50ae547a78ac50c54fc9f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "newest-cni-948249",
	                "Source": "/var/lib/docker/volumes/newest-cni-948249/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-948249",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-948249",
	                "name.minikube.sigs.k8s.io": "newest-cni-948249",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "897e244c797d9c3cd4ace59bd49f1770c94ab4fdb4acd72b030d07228106b904",
	            "SandboxKey": "/var/run/docker/netns/897e244c797d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33083"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33084"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33087"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33085"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33086"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-948249": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "5e:83:68:16:d8:f7",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c3da6258d7ca1e0640d947578734878b1bdb58036b53baabf5783f672d1a649d",
	                    "EndpointID": "01253e05c77da2ecadc06fd88178ba93f78ccb6cccedc077b2964fe4737b9a15",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-948249",
	                        "772438acfd05"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
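The full docker inspect dump above is kept for the record, but the few fields the post-mortem actually consults can be pulled directly with docker's Go-template --format flag. A sketch using template paths visible in the JSON above (a convenience illustration, not part of helpers_test.go):

	// inspect_fields.go: hypothetical convenience, not helpers_test.go code.
	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	func main() {
		// Template paths mirror the JSON above: .State.Status,
		// .State.Paused, .RestartCount.
		out, err := exec.Command("docker", "inspect",
			"--format", "{{.State.Status}} {{.State.Paused}} {{.RestartCount}}",
			"newest-cni-948249").Output()
		if err != nil {
			log.Fatal(err)
		}
		f := strings.Fields(string(out))
		if len(f) != 3 {
			log.Fatalf("unexpected inspect output: %q", out)
		}
		fmt.Printf("status=%s paused=%s restarts=%s\n", f[0], f[1], f[2])
	}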
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-948249 -n newest-cni-948249
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-948249 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-948249 logs -n 25: (1.136700419s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p old-k8s-version-706771                                                                                                                                                                                                                     │ old-k8s-version-706771       │ jenkins │ v1.37.0 │ 24 Nov 25 14:17 UTC │ 24 Nov 25 14:17 UTC │
	│ delete  │ -p old-k8s-version-706771                                                                                                                                                                                                                     │ old-k8s-version-706771       │ jenkins │ v1.37.0 │ 24 Nov 25 14:17 UTC │ 24 Nov 25 14:17 UTC │
	│ start   │ -p no-preload-444317 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-444317            │ jenkins │ v1.37.0 │ 24 Nov 25 14:17 UTC │ 24 Nov 25 14:18 UTC │
	│ delete  │ -p cert-expiration-032076                                                                                                                                                                                                                     │ cert-expiration-032076       │ jenkins │ v1.37.0 │ 24 Nov 25 14:17 UTC │ 24 Nov 25 14:17 UTC │
	│ start   │ -p embed-certs-720293 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-720293           │ jenkins │ v1.37.0 │ 24 Nov 25 14:17 UTC │ 24 Nov 25 14:19 UTC │
	│ addons  │ enable metrics-server -p no-preload-444317 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-444317            │ jenkins │ v1.37.0 │ 24 Nov 25 14:18 UTC │                     │
	│ stop    │ -p no-preload-444317 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-444317            │ jenkins │ v1.37.0 │ 24 Nov 25 14:18 UTC │ 24 Nov 25 14:18 UTC │
	│ addons  │ enable dashboard -p no-preload-444317 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-444317            │ jenkins │ v1.37.0 │ 24 Nov 25 14:18 UTC │ 24 Nov 25 14:18 UTC │
	│ start   │ -p no-preload-444317 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-444317            │ jenkins │ v1.37.0 │ 24 Nov 25 14:18 UTC │ 24 Nov 25 14:19 UTC │
	│ addons  │ enable metrics-server -p embed-certs-720293 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-720293           │ jenkins │ v1.37.0 │ 24 Nov 25 14:19 UTC │                     │
	│ stop    │ -p embed-certs-720293 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-720293           │ jenkins │ v1.37.0 │ 24 Nov 25 14:19 UTC │ 24 Nov 25 14:19 UTC │
	│ addons  │ enable dashboard -p embed-certs-720293 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-720293           │ jenkins │ v1.37.0 │ 24 Nov 25 14:19 UTC │ 24 Nov 25 14:19 UTC │
	│ start   │ -p embed-certs-720293 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-720293           │ jenkins │ v1.37.0 │ 24 Nov 25 14:19 UTC │ 24 Nov 25 14:20 UTC │
	│ image   │ no-preload-444317 image list --format=json                                                                                                                                                                                                    │ no-preload-444317            │ jenkins │ v1.37.0 │ 24 Nov 25 14:19 UTC │ 24 Nov 25 14:19 UTC │
	│ pause   │ -p no-preload-444317 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-444317            │ jenkins │ v1.37.0 │ 24 Nov 25 14:19 UTC │                     │
	│ delete  │ -p no-preload-444317                                                                                                                                                                                                                          │ no-preload-444317            │ jenkins │ v1.37.0 │ 24 Nov 25 14:20 UTC │ 24 Nov 25 14:20 UTC │
	│ delete  │ -p no-preload-444317                                                                                                                                                                                                                          │ no-preload-444317            │ jenkins │ v1.37.0 │ 24 Nov 25 14:20 UTC │ 24 Nov 25 14:20 UTC │
	│ delete  │ -p disable-driver-mounts-799392                                                                                                                                                                                                               │ disable-driver-mounts-799392 │ jenkins │ v1.37.0 │ 24 Nov 25 14:20 UTC │ 24 Nov 25 14:20 UTC │
	│ start   │ -p default-k8s-diff-port-152851 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-152851 │ jenkins │ v1.37.0 │ 24 Nov 25 14:20 UTC │ 24 Nov 25 14:21 UTC │
	│ image   │ embed-certs-720293 image list --format=json                                                                                                                                                                                                   │ embed-certs-720293           │ jenkins │ v1.37.0 │ 24 Nov 25 14:20 UTC │ 24 Nov 25 14:20 UTC │
	│ pause   │ -p embed-certs-720293 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-720293           │ jenkins │ v1.37.0 │ 24 Nov 25 14:20 UTC │                     │
	│ delete  │ -p embed-certs-720293                                                                                                                                                                                                                         │ embed-certs-720293           │ jenkins │ v1.37.0 │ 24 Nov 25 14:20 UTC │ 24 Nov 25 14:20 UTC │
	│ delete  │ -p embed-certs-720293                                                                                                                                                                                                                         │ embed-certs-720293           │ jenkins │ v1.37.0 │ 24 Nov 25 14:20 UTC │ 24 Nov 25 14:20 UTC │
	│ start   │ -p newest-cni-948249 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-948249            │ jenkins │ v1.37.0 │ 24 Nov 25 14:20 UTC │ 24 Nov 25 14:21 UTC │
	│ addons  │ enable metrics-server -p newest-cni-948249 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-948249            │ jenkins │ v1.37.0 │ 24 Nov 25 14:21 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 14:20:52
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 14:20:52.748481  206362 out.go:360] Setting OutFile to fd 1 ...
	I1124 14:20:52.748619  206362 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 14:20:52.748625  206362 out.go:374] Setting ErrFile to fd 2...
	I1124 14:20:52.748629  206362 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 14:20:52.748919  206362 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-2805/.minikube/bin
	I1124 14:20:52.749332  206362 out.go:368] Setting JSON to false
	I1124 14:20:52.750288  206362 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":7404,"bootTime":1763986649,"procs":196,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1124 14:20:52.750365  206362 start.go:143] virtualization:  
	I1124 14:20:52.754541  206362 out.go:179] * [newest-cni-948249] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1124 14:20:52.757939  206362 out.go:179]   - MINIKUBE_LOCATION=21932
	I1124 14:20:52.758000  206362 notify.go:221] Checking for updates...
	I1124 14:20:52.764559  206362 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 14:20:52.767502  206362 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21932-2805/kubeconfig
	I1124 14:20:52.770638  206362 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21932-2805/.minikube
	I1124 14:20:52.774264  206362 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1124 14:20:52.777231  206362 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 14:20:52.781858  206362 config.go:182] Loaded profile config "default-k8s-diff-port-152851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 14:20:52.781988  206362 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 14:20:52.825066  206362 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1124 14:20:52.825261  206362 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 14:20:52.895672  206362 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-24 14:20:52.885537131 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 14:20:52.895791  206362 docker.go:319] overlay module found
	I1124 14:20:52.900865  206362 out.go:179] * Using the docker driver based on user configuration
	I1124 14:20:52.903694  206362 start.go:309] selected driver: docker
	I1124 14:20:52.903717  206362 start.go:927] validating driver "docker" against <nil>
	I1124 14:20:52.903731  206362 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 14:20:52.904461  206362 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 14:20:52.970872  206362 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-24 14:20:52.961739928 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 14:20:52.971036  206362 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1124 14:20:52.971069  206362 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1124 14:20:52.971296  206362 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1124 14:20:52.974213  206362 out.go:179] * Using Docker driver with root privileges
	I1124 14:20:52.976994  206362 cni.go:84] Creating CNI manager for ""
	I1124 14:20:52.977065  206362 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 14:20:52.977082  206362 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1124 14:20:52.977168  206362 start.go:353] cluster config:
	{Name:newest-cni-948249 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-948249 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 14:20:52.980196  206362 out.go:179] * Starting "newest-cni-948249" primary control-plane node in "newest-cni-948249" cluster
	I1124 14:20:52.983042  206362 cache.go:134] Beginning downloading kic base image for docker with crio
	I1124 14:20:52.986130  206362 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1124 14:20:52.988809  206362 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 14:20:52.988860  206362 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21932-2805/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1124 14:20:52.988874  206362 cache.go:65] Caching tarball of preloaded images
	I1124 14:20:52.988880  206362 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1124 14:20:52.988972  206362 preload.go:238] Found /home/jenkins/minikube-integration/21932-2805/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1124 14:20:52.988983  206362 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1124 14:20:52.989095  206362 profile.go:143] Saving config to /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/newest-cni-948249/config.json ...
	I1124 14:20:52.989120  206362 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/newest-cni-948249/config.json: {Name:mk85119cd28fc543a9cae5729d6f3752a413ce18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:20:53.015276  206362 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1124 14:20:53.015301  206362 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1124 14:20:53.015323  206362 cache.go:240] Successfully downloaded all kic artifacts
	I1124 14:20:53.015389  206362 start.go:360] acquireMachinesLock for newest-cni-948249: {Name:mk494569275f434d30089868c4fe183eb1572641 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 14:20:53.015511  206362 start.go:364] duration metric: took 96.05µs to acquireMachinesLock for "newest-cni-948249"
	I1124 14:20:53.015551  206362 start.go:93] Provisioning new machine with config: &{Name:newest-cni-948249 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-948249 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 14:20:53.015634  206362 start.go:125] createHost starting for "" (driver="docker")
	W1124 14:20:51.133438  202335 node_ready.go:57] node "default-k8s-diff-port-152851" has "Ready":"False" status (will retry)
	W1124 14:20:53.632537  202335 node_ready.go:57] node "default-k8s-diff-port-152851" has "Ready":"False" status (will retry)
	I1124 14:20:53.019271  206362 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1124 14:20:53.019588  206362 start.go:159] libmachine.API.Create for "newest-cni-948249" (driver="docker")
	I1124 14:20:53.019632  206362 client.go:173] LocalClient.Create starting
	I1124 14:20:53.019714  206362 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca.pem
	I1124 14:20:53.019760  206362 main.go:143] libmachine: Decoding PEM data...
	I1124 14:20:53.019787  206362 main.go:143] libmachine: Parsing certificate...
	I1124 14:20:53.019860  206362 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21932-2805/.minikube/certs/cert.pem
	I1124 14:20:53.019888  206362 main.go:143] libmachine: Decoding PEM data...
	I1124 14:20:53.019901  206362 main.go:143] libmachine: Parsing certificate...
	I1124 14:20:53.020315  206362 cli_runner.go:164] Run: docker network inspect newest-cni-948249 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1124 14:20:53.047706  206362 cli_runner.go:211] docker network inspect newest-cni-948249 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1124 14:20:53.047810  206362 network_create.go:284] running [docker network inspect newest-cni-948249] to gather additional debugging logs...
	I1124 14:20:53.047831  206362 cli_runner.go:164] Run: docker network inspect newest-cni-948249
	W1124 14:20:53.065992  206362 cli_runner.go:211] docker network inspect newest-cni-948249 returned with exit code 1
	I1124 14:20:53.066032  206362 network_create.go:287] error running [docker network inspect newest-cni-948249]: docker network inspect newest-cni-948249: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-948249 not found
	I1124 14:20:53.066047  206362 network_create.go:289] output of [docker network inspect newest-cni-948249]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-948249 not found
	
	** /stderr **
	I1124 14:20:53.066174  206362 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 14:20:53.083943  206362 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-b3087ee9f269 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:1a:07:60:94:e6:54} reservation:<nil>}
	I1124 14:20:53.084383  206362 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-87dca5a19352 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:e6:6c:c1:85:45:94} reservation:<nil>}
	I1124 14:20:53.084742  206362 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-9e995bd1b79e IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:82:f1:73:f5:6f:cf} reservation:<nil>}
	I1124 14:20:53.084998  206362 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-13603eff9881 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:26:0b:69:f9:14:50} reservation:<nil>}
	I1124 14:20:53.085405  206362 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001ab6880}
	I1124 14:20:53.085428  206362 network_create.go:124] attempt to create docker network newest-cni-948249 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1124 14:20:53.085491  206362 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-948249 newest-cni-948249
	I1124 14:20:53.152407  206362 network_create.go:108] docker network newest-cni-948249 192.168.85.0/24 created
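
Note: the network.go lines above show how minikube picks the subnet for this cluster. It walks candidate private /24 blocks (192.168.49.0/24, 192.168.58.0/24, ... stepping by 9 in the third octet, as the sequence in the log suggests) and takes the first one not already backed by a bridge interface. A minimal Go sketch of that scan; the step-of-9 walk is inferred from the log, and net.InterfaceAddrs stands in for minikube's real bridge inspection:

package main

import (
	"fmt"
	"net"
)

// takenSubnets returns the IPv4 /24 prefixes already claimed by a local
// interface address (a rough stand-in for minikube's bridge check).
func takenSubnets() map[string]bool {
	taken := map[string]bool{}
	addrs, err := net.InterfaceAddrs()
	if err != nil {
		return taken
	}
	for _, a := range addrs {
		if ipn, ok := a.(*net.IPNet); ok {
			if ip4 := ipn.IP.To4(); ip4 != nil {
				taken[fmt.Sprintf("%d.%d.%d.0/24", ip4[0], ip4[1], ip4[2])] = true
			}
		}
	}
	return taken
}

func main() {
	taken := takenSubnets()
	// Walk 192.168.49.0/24, 192.168.58.0/24, 192.168.67.0/24, ... as in the log.
	for octet := 49; octet < 255; octet += 9 {
		subnet := fmt.Sprintf("192.168.%d.0/24", octet)
		if taken[subnet] {
			fmt.Println("skipping subnet", subnet, "that is taken")
			continue
		}
		fmt.Println("using free private subnet", subnet)
		return
	}
}
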
	I1124 14:20:53.152440  206362 kic.go:121] calculated static IP "192.168.85.2" for the "newest-cni-948249" container
	I1124 14:20:53.152514  206362 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1124 14:20:53.169817  206362 cli_runner.go:164] Run: docker volume create newest-cni-948249 --label name.minikube.sigs.k8s.io=newest-cni-948249 --label created_by.minikube.sigs.k8s.io=true
	I1124 14:20:53.189351  206362 oci.go:103] Successfully created a docker volume newest-cni-948249
	I1124 14:20:53.189456  206362 cli_runner.go:164] Run: docker run --rm --name newest-cni-948249-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-948249 --entrypoint /usr/bin/test -v newest-cni-948249:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1124 14:20:53.753224  206362 oci.go:107] Successfully prepared a docker volume newest-cni-948249
	I1124 14:20:53.753287  206362 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 14:20:53.753298  206362 kic.go:194] Starting extracting preloaded images to volume ...
	I1124 14:20:53.753376  206362 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21932-2805/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-948249:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir
	W1124 14:20:56.133131  202335 node_ready.go:57] node "default-k8s-diff-port-152851" has "Ready":"False" status (will retry)
	W1124 14:20:58.634268  202335 node_ready.go:57] node "default-k8s-diff-port-152851" has "Ready":"False" status (will retry)
	I1124 14:20:58.251262  206362 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21932-2805/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-948249:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir: (4.497852156s)
	I1124 14:20:58.251297  206362 kic.go:203] duration metric: took 4.497995813s to extract preloaded images to volume ...
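
Note: the 4.5s extraction above works by mounting the preload tarball read-only into a throwaway container and letting /usr/bin/tar unpack it into the machine's Docker volume. A sketch replaying that exact docker run invocation from Go; the paths and image reference are copied from the log, and error handling is simplified:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	const (
		tarball = "/home/jenkins/minikube-integration/21932-2805/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4"
		volume  = "newest-cni-948249"
		image   = "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f"
	)
	start := time.Now()
	// Mount the tarball read-only and the machine volume at /extractDir,
	// then let tar unpack the lz4 stream inside the container.
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro",
		"-v", volume+":/extractDir",
		image, "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	if out, err := cmd.CombinedOutput(); err != nil {
		fmt.Printf("extract failed: %v\n%s", err, out)
		return
	}
	fmt.Printf("extracted preloaded images in %s\n", time.Since(start))
}
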
	W1124 14:20:58.251482  206362 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1124 14:20:58.251589  206362 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1124 14:20:58.307484  206362 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-948249 --name newest-cni-948249 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-948249 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-948249 --network newest-cni-948249 --ip 192.168.85.2 --volume newest-cni-948249:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1124 14:20:58.640534  206362 cli_runner.go:164] Run: docker container inspect newest-cni-948249 --format={{.State.Running}}
	I1124 14:20:58.662592  206362 cli_runner.go:164] Run: docker container inspect newest-cni-948249 --format={{.State.Status}}
	I1124 14:20:58.696657  206362 cli_runner.go:164] Run: docker exec newest-cni-948249 stat /var/lib/dpkg/alternatives/iptables
	I1124 14:20:58.758696  206362 oci.go:144] the created container "newest-cni-948249" has a running status.
	I1124 14:20:58.758724  206362 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21932-2805/.minikube/machines/newest-cni-948249/id_rsa...
	I1124 14:20:58.981434  206362 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21932-2805/.minikube/machines/newest-cni-948249/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1124 14:20:59.007542  206362 cli_runner.go:164] Run: docker container inspect newest-cni-948249 --format={{.State.Status}}
	I1124 14:20:59.033224  206362 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1124 14:20:59.033244  206362 kic_runner.go:114] Args: [docker exec --privileged newest-cni-948249 chown docker:docker /home/docker/.ssh/authorized_keys]
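
Note: the id_rsa/authorized_keys steps above install a generated public key into the container so provisioning can SSH in as the docker user. A hedged sketch of producing such an authorized_keys entry with golang.org/x/crypto/ssh; the 2048-bit key size is an assumption, not necessarily what minikube uses:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// 2048-bit RSA is an assumption for this sketch; minikube's id_rsa may differ.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	pub, err := ssh.NewPublicKey(&key.PublicKey)
	if err != nil {
		panic(err)
	}
	// MarshalAuthorizedKey yields the "ssh-rsa AAAA...\n" form that lands in
	// /home/docker/.ssh/authorized_keys inside the container.
	line := ssh.MarshalAuthorizedKey(pub)
	if err := os.WriteFile("id_rsa.pub", line, 0o644); err != nil {
		panic(err)
	}
	fmt.Printf("wrote %d-byte authorized_keys entry\n", len(line))
}
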
	I1124 14:20:59.088361  206362 cli_runner.go:164] Run: docker container inspect newest-cni-948249 --format={{.State.Status}}
	I1124 14:20:59.112653  206362 machine.go:94] provisionDockerMachine start ...
	I1124 14:20:59.112748  206362 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-948249
	I1124 14:20:59.145613  206362 main.go:143] libmachine: Using SSH client type: native
	I1124 14:20:59.145969  206362 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I1124 14:20:59.145987  206362 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 14:20:59.146582  206362 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:44174->127.0.0.1:33083: read: connection reset by peer
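
Note: the handshake failure above is the usual race with sshd coming up inside the fresh container; provisioning simply retries until the port answers, which here takes about three seconds (the success follows below). A sketch of such a wait loop, assuming a plain TCP probe on the published port; the timeout and backoff values are illustrative, not minikube's:

package main

import (
	"fmt"
	"net"
	"time"
)

// waitForSSH polls the forwarded SSH port until something accepts a
// connection or the deadline passes.
func waitForSSH(addr string, deadline time.Duration) error {
	stop := time.Now().Add(deadline)
	for time.Now().Before(stop) {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		time.Sleep(500 * time.Millisecond) // illustrative backoff
	}
	return fmt.Errorf("ssh on %s not reachable after %s", addr, deadline)
}

func main() {
	if err := waitForSSH("127.0.0.1:33083", time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("sshd is accepting connections")
}
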
	I1124 14:21:02.299276  206362 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-948249
	
	I1124 14:21:02.299303  206362 ubuntu.go:182] provisioning hostname "newest-cni-948249"
	I1124 14:21:02.299406  206362 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-948249
	I1124 14:21:02.317780  206362 main.go:143] libmachine: Using SSH client type: native
	I1124 14:21:02.318110  206362 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I1124 14:21:02.318129  206362 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-948249 && echo "newest-cni-948249" | sudo tee /etc/hostname
	I1124 14:21:02.485113  206362 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-948249
	
	I1124 14:21:02.485212  206362 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-948249
	I1124 14:21:02.509295  206362 main.go:143] libmachine: Using SSH client type: native
	I1124 14:21:02.509616  206362 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I1124 14:21:02.509639  206362 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-948249' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-948249/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-948249' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 14:21:02.664100  206362 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 14:21:02.664131  206362 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21932-2805/.minikube CaCertPath:/home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21932-2805/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21932-2805/.minikube}
	I1124 14:21:02.664172  206362 ubuntu.go:190] setting up certificates
	I1124 14:21:02.664184  206362 provision.go:84] configureAuth start
	I1124 14:21:02.664272  206362 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-948249
	I1124 14:21:02.686347  206362 provision.go:143] copyHostCerts
	I1124 14:21:02.686419  206362 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-2805/.minikube/ca.pem, removing ...
	I1124 14:21:02.686433  206362 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-2805/.minikube/ca.pem
	I1124 14:21:02.686541  206362 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21932-2805/.minikube/ca.pem (1078 bytes)
	I1124 14:21:02.686671  206362 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-2805/.minikube/cert.pem, removing ...
	I1124 14:21:02.686683  206362 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-2805/.minikube/cert.pem
	I1124 14:21:02.686718  206362 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-2805/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21932-2805/.minikube/cert.pem (1123 bytes)
	I1124 14:21:02.686788  206362 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-2805/.minikube/key.pem, removing ...
	I1124 14:21:02.686798  206362 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-2805/.minikube/key.pem
	I1124 14:21:02.686827  206362 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-2805/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21932-2805/.minikube/key.pem (1675 bytes)
	I1124 14:21:02.686888  206362 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21932-2805/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca-key.pem org=jenkins.newest-cni-948249 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-948249]
	W1124 14:21:00.634642  202335 node_ready.go:57] node "default-k8s-diff-port-152851" has "Ready":"False" status (will retry)
	W1124 14:21:03.134075  202335 node_ready.go:57] node "default-k8s-diff-port-152851" has "Ready":"False" status (will retry)
	I1124 14:21:02.754121  206362 provision.go:177] copyRemoteCerts
	I1124 14:21:02.754192  206362 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 14:21:02.754241  206362 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-948249
	I1124 14:21:02.771855  206362 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/newest-cni-948249/id_rsa Username:docker}
	I1124 14:21:02.879659  206362 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1124 14:21:02.898545  206362 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1124 14:21:02.916652  206362 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1124 14:21:02.935846  206362 provision.go:87] duration metric: took 271.61769ms to configureAuth
	I1124 14:21:02.935917  206362 ubuntu.go:206] setting minikube options for container-runtime
	I1124 14:21:02.936130  206362 config.go:182] Loaded profile config "newest-cni-948249": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 14:21:02.936263  206362 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-948249
	I1124 14:21:02.952970  206362 main.go:143] libmachine: Using SSH client type: native
	I1124 14:21:02.953285  206362 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I1124 14:21:02.953299  206362 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1124 14:21:03.261064  206362 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1124 14:21:03.261156  206362 machine.go:97] duration metric: took 4.148483188s to provisionDockerMachine
	I1124 14:21:03.261190  206362 client.go:176] duration metric: took 10.241538574s to LocalClient.Create
	I1124 14:21:03.261241  206362 start.go:167] duration metric: took 10.241654612s to libmachine.API.Create "newest-cni-948249"
	I1124 14:21:03.261263  206362 start.go:293] postStartSetup for "newest-cni-948249" (driver="docker")
	I1124 14:21:03.261312  206362 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 14:21:03.261402  206362 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 14:21:03.261468  206362 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-948249
	I1124 14:21:03.279322  206362 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/newest-cni-948249/id_rsa Username:docker}
	I1124 14:21:03.387910  206362 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 14:21:03.391531  206362 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 14:21:03.391557  206362 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 14:21:03.391569  206362 filesync.go:126] Scanning /home/jenkins/minikube-integration/21932-2805/.minikube/addons for local assets ...
	I1124 14:21:03.391623  206362 filesync.go:126] Scanning /home/jenkins/minikube-integration/21932-2805/.minikube/files for local assets ...
	I1124 14:21:03.391711  206362 filesync.go:149] local asset: /home/jenkins/minikube-integration/21932-2805/.minikube/files/etc/ssl/certs/46112.pem -> 46112.pem in /etc/ssl/certs
	I1124 14:21:03.391820  206362 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 14:21:03.399538  206362 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/files/etc/ssl/certs/46112.pem --> /etc/ssl/certs/46112.pem (1708 bytes)
	I1124 14:21:03.419094  206362 start.go:296] duration metric: took 157.801241ms for postStartSetup
	I1124 14:21:03.419495  206362 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-948249
	I1124 14:21:03.436944  206362 profile.go:143] Saving config to /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/newest-cni-948249/config.json ...
	I1124 14:21:03.437228  206362 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 14:21:03.437281  206362 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-948249
	I1124 14:21:03.456110  206362 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/newest-cni-948249/id_rsa Username:docker}
	I1124 14:21:03.560928  206362 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 14:21:03.565627  206362 start.go:128] duration metric: took 10.549972687s to createHost
	I1124 14:21:03.565651  206362 start.go:83] releasing machines lock for "newest-cni-948249", held for 10.550124172s
	I1124 14:21:03.565723  206362 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-948249
	I1124 14:21:03.582661  206362 ssh_runner.go:195] Run: cat /version.json
	I1124 14:21:03.582716  206362 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-948249
	I1124 14:21:03.583948  206362 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 14:21:03.584029  206362 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-948249
	I1124 14:21:03.599604  206362 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/newest-cni-948249/id_rsa Username:docker}
	I1124 14:21:03.610710  206362 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/newest-cni-948249/id_rsa Username:docker}
	I1124 14:21:03.703410  206362 ssh_runner.go:195] Run: systemctl --version
	I1124 14:21:03.798502  206362 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1124 14:21:03.843113  206362 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 14:21:03.847910  206362 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 14:21:03.847989  206362 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 14:21:03.878860  206362 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1124 14:21:03.878888  206362 start.go:496] detecting cgroup driver to use...
	I1124 14:21:03.878922  206362 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1124 14:21:03.878978  206362 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1124 14:21:03.900040  206362 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1124 14:21:03.914981  206362 docker.go:218] disabling cri-docker service (if available) ...
	I1124 14:21:03.915051  206362 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 14:21:03.936471  206362 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 14:21:03.958382  206362 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 14:21:04.101608  206362 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 14:21:04.236760  206362 docker.go:234] disabling docker service ...
	I1124 14:21:04.236834  206362 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 14:21:04.261510  206362 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 14:21:04.277182  206362 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 14:21:04.411846  206362 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 14:21:04.547257  206362 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 14:21:04.560548  206362 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 14:21:04.574414  206362 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1124 14:21:04.574522  206362 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 14:21:04.584275  206362 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1124 14:21:04.584364  206362 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 14:21:04.597166  206362 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 14:21:04.606993  206362 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 14:21:04.616617  206362 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 14:21:04.625551  206362 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 14:21:04.636711  206362 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 14:21:04.651510  206362 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 14:21:04.660541  206362 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 14:21:04.668613  206362 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 14:21:04.676009  206362 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 14:21:04.795767  206362 ssh_runner.go:195] Run: sudo systemctl restart crio
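
Note: the sed sequence above edits /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, switch cgroup_manager to cgroupfs, re-add conmon_cgroup = "pod", and ensure default_sysctls opens unprivileged ports from 0, followed by a daemon-reload and crio restart. A rough Go equivalent of those edits applied to a local copy of the file; the regexes mirror the sed expressions shown in the log, and this is an illustration, not minikube's code:

package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	const path = "02-crio.conf" // operate on a local copy; the log edits it over SSH with sed
	data, err := os.ReadFile(path)
	if err != nil {
		fmt.Println(err)
		return
	}
	s := string(data)
	// sed 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|'
	s = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(s, `pause_image = "registry.k8s.io/pause:3.10.1"`)
	// sed '/conmon_cgroup = .*/d', then re-add it right after cgroup_manager
	s = regexp.MustCompile(`(?m)^.*conmon_cgroup = .*\n?`).ReplaceAllString(s, "")
	s = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(s, "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"")
	// ensure default_sysctls exists and allows unprivileged binds to low ports
	if !regexp.MustCompile(`(?m)^ *default_sysctls`).MatchString(s) {
		s += "default_sysctls = [\n  \"net.ipv4.ip_unprivileged_port_start=0\",\n]\n"
	}
	if err := os.WriteFile(path, []byte(s), 0o644); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("rewrote", path)
}
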
	I1124 14:21:04.964747  206362 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1124 14:21:04.964869  206362 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1124 14:21:04.968659  206362 start.go:564] Will wait 60s for crictl version
	I1124 14:21:04.968765  206362 ssh_runner.go:195] Run: which crictl
	I1124 14:21:04.972355  206362 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 14:21:04.999321  206362 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1124 14:21:04.999564  206362 ssh_runner.go:195] Run: crio --version
	I1124 14:21:05.042789  206362 ssh_runner.go:195] Run: crio --version
	I1124 14:21:05.076515  206362 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1124 14:21:05.079514  206362 cli_runner.go:164] Run: docker network inspect newest-cni-948249 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 14:21:05.097046  206362 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1124 14:21:05.100944  206362 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
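
Note: the bash one-liner above is minikube's idempotent /etc/hosts update: filter out any stale host.minikube.internal line, append the fresh mapping, and copy the temp file over the original. The same idea in Go, with an atomic rename standing in for the sudo cp; the paths and tab-separated format are taken from the log:

package main

import (
	"fmt"
	"os"
	"strings"
)

// setHostsEntry drops any existing line for the name, appends a fresh
// "<ip>\t<name>" entry, and replaces the file in one rename so readers
// never see a partial write. (Writing /etc/hosts needs root.)
func setHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	tmp := path + ".tmp"
	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		return err
	}
	return os.Rename(tmp, path)
}

func main() {
	if err := setHostsEntry("/etc/hosts", "192.168.85.1", "host.minikube.internal"); err != nil {
		fmt.Println(err)
	}
}
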
	I1124 14:21:05.113840  206362 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1124 14:21:05.116492  206362 kubeadm.go:884] updating cluster {Name:newest-cni-948249 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-948249 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 14:21:05.116661  206362 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 14:21:05.116740  206362 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 14:21:05.153552  206362 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 14:21:05.153579  206362 crio.go:433] Images already preloaded, skipping extraction
	I1124 14:21:05.153637  206362 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 14:21:05.185203  206362 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 14:21:05.185227  206362 cache_images.go:86] Images are preloaded, skipping loading
	I1124 14:21:05.185236  206362 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1124 14:21:05.185319  206362 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-948249 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-948249 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1124 14:21:05.185401  206362 ssh_runner.go:195] Run: crio config
	I1124 14:21:05.266274  206362 cni.go:84] Creating CNI manager for ""
	I1124 14:21:05.266297  206362 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 14:21:05.266317  206362 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1124 14:21:05.266343  206362 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-948249 NodeName:newest-cni-948249 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 14:21:05.266470  206362 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-948249"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1124 14:21:05.266552  206362 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1124 14:21:05.274659  206362 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 14:21:05.274769  206362 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 14:21:05.282495  206362 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1124 14:21:05.295958  206362 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 14:21:05.309372  206362 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
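
Note: the "scp memory" entries above mean the kubelet unit, its drop-in, and kubeadm.yaml are rendered in memory and streamed to the remote path over the existing SSH session, not copied from local files. A sketch of that pattern with golang.org/x/crypto/ssh, using sudo tee on the remote side to land the bytes; minikube's ssh_runner differs in detail, and the port, user, and key path here are taken from the log:

package main

import (
	"bytes"
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// pushBytes writes in-memory content to a remote path, the moral
// equivalent of the "scp memory --> ..." lines in the log.
func pushBytes(client *ssh.Client, content []byte, remotePath string) error {
	sess, err := client.NewSession()
	if err != nil {
		return err
	}
	defer sess.Close()
	sess.Stdin = bytes.NewReader(content)
	// sudo tee lands the bytes at the destination without a remote temp file.
	return sess.Run(fmt.Sprintf("sudo tee %q >/dev/null", remotePath))
}

func main() {
	key, err := os.ReadFile("id_rsa") // the machine key from .minikube/machines in the log
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:33083", &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local test container
	})
	if err != nil {
		panic(err)
	}
	defer client.Close()
	unit := []byte("[Unit]\nWants=crio.service\n") // illustrative payload
	if err := pushBytes(client, unit, "/lib/systemd/system/kubelet.service"); err != nil {
		panic(err)
	}
}
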
	I1124 14:21:05.322881  206362 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1124 14:21:05.326763  206362 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 14:21:05.337070  206362 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 14:21:05.458524  206362 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 14:21:05.475421  206362 certs.go:69] Setting up /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/newest-cni-948249 for IP: 192.168.85.2
	I1124 14:21:05.475483  206362 certs.go:195] generating shared ca certs ...
	I1124 14:21:05.475515  206362 certs.go:227] acquiring lock for ca certs: {Name:mk5b88bcf3bee8e73291a2c9c79f99bafa2afa7b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:21:05.475669  206362 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21932-2805/.minikube/ca.key
	I1124 14:21:05.475740  206362 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21932-2805/.minikube/proxy-client-ca.key
	I1124 14:21:05.475774  206362 certs.go:257] generating profile certs ...
	I1124 14:21:05.475844  206362 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/newest-cni-948249/client.key
	I1124 14:21:05.475887  206362 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/newest-cni-948249/client.crt with IP's: []
	I1124 14:21:05.524568  206362 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/newest-cni-948249/client.crt ...
	I1124 14:21:05.524604  206362 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/newest-cni-948249/client.crt: {Name:mk08ffb57d35315aaa764dc509df39950d10d1de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:21:05.524854  206362 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/newest-cni-948249/client.key ...
	I1124 14:21:05.524873  206362 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/newest-cni-948249/client.key: {Name:mkabcfed202f52272b5c26bb2ab281ed6abeeb09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:21:05.525012  206362 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/newest-cni-948249/apiserver.key.dccfb6e0
	I1124 14:21:05.525032  206362 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/newest-cni-948249/apiserver.crt.dccfb6e0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1124 14:21:05.725135  206362 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/newest-cni-948249/apiserver.crt.dccfb6e0 ...
	I1124 14:21:05.725165  206362 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/newest-cni-948249/apiserver.crt.dccfb6e0: {Name:mk0fffe21a70b3d2fdd6d6ca02f9294ab5051f8f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:21:05.725383  206362 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/newest-cni-948249/apiserver.key.dccfb6e0 ...
	I1124 14:21:05.725403  206362 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/newest-cni-948249/apiserver.key.dccfb6e0: {Name:mkbfa989177df34ce23b089c553c54fc95b660dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:21:05.725520  206362 certs.go:382] copying /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/newest-cni-948249/apiserver.crt.dccfb6e0 -> /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/newest-cni-948249/apiserver.crt
	I1124 14:21:05.725614  206362 certs.go:386] copying /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/newest-cni-948249/apiserver.key.dccfb6e0 -> /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/newest-cni-948249/apiserver.key
	I1124 14:21:05.725682  206362 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/newest-cni-948249/proxy-client.key
	I1124 14:21:05.725703  206362 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/newest-cni-948249/proxy-client.crt with IP's: []
	I1124 14:21:05.858845  206362 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/newest-cni-948249/proxy-client.crt ...
	I1124 14:21:05.858875  206362 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/newest-cni-948249/proxy-client.crt: {Name:mkbb1e507d6c107824ee1dd376a35ddb95080d0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:21:05.859058  206362 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/newest-cni-948249/proxy-client.key ...
	I1124 14:21:05.859075  206362 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/newest-cni-948249/proxy-client.key: {Name:mkb46b0c6efad5ca12e7ce01466656f7348c11eb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
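
Note: certs.go above mints the profile's API-server certificate with the SAN set [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2], signed by the shared minikubeCA. A self-contained crypto/x509 sketch of a CA-signed server cert with those IP SANs; the CA here is freshly generated for illustration (minikube reuses the one under .minikube), and the 26280h lifetime comes from CertExpiration in the config above:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func check(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	// A throwaway CA standing in for minikubeCA.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	check(err)
	ca := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(26280 * time.Hour), // CertExpiration:26280h0m0s
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, ca, ca, &caKey.PublicKey, caKey)
	check(err)
	caCert, err := x509.ParseCertificate(caDER)
	check(err)

	// Server cert with the IP SANs from the log line above.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	check(err)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.85.2"),
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	check(err)
	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
}
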
	I1124 14:21:05.859267  206362 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2805/.minikube/certs/4611.pem (1338 bytes)
	W1124 14:21:05.859314  206362 certs.go:480] ignoring /home/jenkins/minikube-integration/21932-2805/.minikube/certs/4611_empty.pem, impossibly tiny 0 bytes
	I1124 14:21:05.859330  206362 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca-key.pem (1679 bytes)
	I1124 14:21:05.859380  206362 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca.pem (1078 bytes)
	I1124 14:21:05.859413  206362 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2805/.minikube/certs/cert.pem (1123 bytes)
	I1124 14:21:05.859444  206362 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2805/.minikube/certs/key.pem (1675 bytes)
	I1124 14:21:05.859504  206362 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2805/.minikube/files/etc/ssl/certs/46112.pem (1708 bytes)
	I1124 14:21:05.860128  206362 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 14:21:05.880996  206362 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1124 14:21:05.899522  206362 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 14:21:05.920412  206362 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1124 14:21:05.939274  206362 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/newest-cni-948249/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1124 14:21:05.957540  206362 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/newest-cni-948249/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1124 14:21:05.976574  206362 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/newest-cni-948249/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 14:21:05.995970  206362 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/newest-cni-948249/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1124 14:21:06.018652  206362 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 14:21:06.041432  206362 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/certs/4611.pem --> /usr/share/ca-certificates/4611.pem (1338 bytes)
	I1124 14:21:06.064678  206362 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/files/etc/ssl/certs/46112.pem --> /usr/share/ca-certificates/46112.pem (1708 bytes)
	I1124 14:21:06.085615  206362 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 14:21:06.099753  206362 ssh_runner.go:195] Run: openssl version
	I1124 14:21:06.106259  206362 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 14:21:06.115339  206362 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 14:21:06.120043  206362 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 13:14 /usr/share/ca-certificates/minikubeCA.pem
	I1124 14:21:06.120178  206362 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 14:21:06.165532  206362 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 14:21:06.174229  206362 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4611.pem && ln -fs /usr/share/ca-certificates/4611.pem /etc/ssl/certs/4611.pem"
	I1124 14:21:06.183933  206362 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4611.pem
	I1124 14:21:06.188071  206362 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 13:21 /usr/share/ca-certificates/4611.pem
	I1124 14:21:06.188187  206362 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4611.pem
	I1124 14:21:06.232481  206362 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4611.pem /etc/ssl/certs/51391683.0"
	I1124 14:21:06.241241  206362 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/46112.pem && ln -fs /usr/share/ca-certificates/46112.pem /etc/ssl/certs/46112.pem"
	I1124 14:21:06.250248  206362 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/46112.pem
	I1124 14:21:06.254424  206362 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 13:21 /usr/share/ca-certificates/46112.pem
	I1124 14:21:06.254530  206362 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/46112.pem
	I1124 14:21:06.295934  206362 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/46112.pem /etc/ssl/certs/3ec20f2e.0"
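
Note: the three openssl x509 -hash / ln -fs rounds above build OpenSSL-style hashed lookup names (b5213941.0, 51391683.0, 3ec20f2e.0) under /etc/ssl/certs so the TLS stack can find each CA by subject hash. A small Go sketch of one such round, shelling out to openssl exactly as the log does; writing under /etc/ssl/certs needs root:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// hashLink reproduces the pattern from the log: compute the OpenSSL
// subject hash of a PEM cert and point /etc/ssl/certs/<hash>.0 at it.
func hashLink(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
	os.Remove(link) // ln -fs semantics: replace any existing link
	return os.Symlink(pemPath, link)
}

func main() {
	if err := hashLink("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println(err)
	}
}
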
	I1124 14:21:06.304272  206362 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 14:21:06.307801  206362 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1124 14:21:06.307854  206362 kubeadm.go:401] StartCluster: {Name:newest-cni-948249 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-948249 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 14:21:06.307938  206362 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 14:21:06.307995  206362 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 14:21:06.335644  206362 cri.go:89] found id: ""
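	The empty found id: "" result confirms no kube-system containers exist yet, so this is a fresh start rather than a restart. The same check can be reproduced by hand inside the node (reachable e.g. via minikube ssh -p newest-cni-948249):
	  sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	  # no output means no existing kube-system containers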
	I1124 14:21:06.335767  206362 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 14:21:06.343858  206362 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1124 14:21:06.352113  206362 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1124 14:21:06.352224  206362 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1124 14:21:06.360080  206362 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1124 14:21:06.360100  206362 kubeadm.go:158] found existing configuration files:
	
	I1124 14:21:06.360150  206362 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1124 14:21:06.367924  206362 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1124 14:21:06.368011  206362 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1124 14:21:06.375494  206362 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1124 14:21:06.383144  206362 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1124 14:21:06.383252  206362 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1124 14:21:06.390645  206362 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1124 14:21:06.398395  206362 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1124 14:21:06.398458  206362 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1124 14:21:06.405708  206362 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1124 14:21:06.413574  206362 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1124 14:21:06.413685  206362 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
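	The log lines from 14:21:06.360 to 14:21:06.421 are minikube's stale-config sweep: each kubeconfig under /etc/kubernetes is kept only if it already references the expected control-plane endpoint, and is otherwise removed so kubeadm regenerates it. An equivalent hand-rolled loop (a sketch, not minikube's actual Go code):
	  for f in admin kubelet controller-manager scheduler; do
	    sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/${f}.conf" \
	      || sudo rm -f "/etc/kubernetes/${f}.conf"
	  done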
	I1124 14:21:06.421455  206362 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1124 14:21:06.461582  206362 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1124 14:21:06.461749  206362 kubeadm.go:319] [preflight] Running pre-flight checks
	I1124 14:21:06.489852  206362 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1124 14:21:06.489982  206362 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1124 14:21:06.490050  206362 kubeadm.go:319] OS: Linux
	I1124 14:21:06.490114  206362 kubeadm.go:319] CGROUPS_CPU: enabled
	I1124 14:21:06.490178  206362 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1124 14:21:06.490253  206362 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1124 14:21:06.490355  206362 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1124 14:21:06.490421  206362 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1124 14:21:06.490500  206362 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1124 14:21:06.490587  206362 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1124 14:21:06.490661  206362 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1124 14:21:06.490739  206362 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1124 14:21:06.572290  206362 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1124 14:21:06.572475  206362 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1124 14:21:06.572590  206362 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1124 14:21:06.580612  206362 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1124 14:21:06.586003  206362 out.go:252]   - Generating certificates and keys ...
	I1124 14:21:06.586117  206362 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1124 14:21:06.586191  206362 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1124 14:21:07.003190  206362 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1124 14:21:07.603594  206362 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	W1124 14:21:05.633531  202335 node_ready.go:57] node "default-k8s-diff-port-152851" has "Ready":"False" status (will retry)
	W1124 14:21:07.633876  202335 node_ready.go:57] node "default-k8s-diff-port-152851" has "Ready":"False" status (will retry)
	I1124 14:21:07.814083  206362 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1124 14:21:07.940106  206362 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1124 14:21:08.189223  206362 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1124 14:21:08.189543  206362 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-948249] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1124 14:21:08.386604  206362 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1124 14:21:08.386947  206362 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-948249] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1124 14:21:08.778310  206362 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1124 14:21:09.607388  206362 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1124 14:21:09.951600  206362 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1124 14:21:09.951921  206362 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1124 14:21:10.403155  206362 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1124 14:21:10.825929  206362 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1124 14:21:11.411530  206362 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1124 14:21:11.765287  206362 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1124 14:21:12.441441  206362 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1124 14:21:12.442390  206362 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1124 14:21:12.445389  206362 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1124 14:21:12.448766  206362 out.go:252]   - Booting up control plane ...
	I1124 14:21:12.448872  206362 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1124 14:21:12.448958  206362 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1124 14:21:12.450697  206362 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1124 14:21:12.470706  206362 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1124 14:21:12.470831  206362 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1124 14:21:12.478184  206362 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1124 14:21:12.478989  206362 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1124 14:21:12.479043  206362 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1124 14:21:12.613961  206362 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1124 14:21:12.614082  206362 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	W1124 14:21:10.133842  202335 node_ready.go:57] node "default-k8s-diff-port-152851" has "Ready":"False" status (will retry)
	W1124 14:21:12.632771  202335 node_ready.go:57] node "default-k8s-diff-port-152851" has "Ready":"False" status (will retry)
	I1124 14:21:13.611909  206362 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001467004s
	I1124 14:21:13.615569  206362 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1124 14:21:13.615666  206362 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1124 14:21:13.615982  206362 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1124 14:21:13.616069  206362 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	W1124 14:21:14.633082  202335 node_ready.go:57] node "default-k8s-diff-port-152851" has "Ready":"False" status (will retry)
	W1124 14:21:16.633383  202335 node_ready.go:57] node "default-k8s-diff-port-152851" has "Ready":"False" status (will retry)
	W1124 14:21:19.132995  202335 node_ready.go:57] node "default-k8s-diff-port-152851" has "Ready":"False" status (will retry)
	I1124 14:21:18.652551  206362 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 5.036415575s
	I1124 14:21:18.786324  206362 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 5.170775305s
	I1124 14:21:20.617080  206362 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 7.001467739s
	I1124 14:21:20.639857  206362 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1124 14:21:20.656682  206362 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1124 14:21:20.672154  206362 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1124 14:21:20.672366  206362 kubeadm.go:319] [mark-control-plane] Marking the node newest-cni-948249 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1124 14:21:20.688281  206362 kubeadm.go:319] [bootstrap-token] Using token: ykqk1e.ortzi4p02rc3bq9p
	I1124 14:21:20.691116  206362 out.go:252]   - Configuring RBAC rules ...
	I1124 14:21:20.691250  206362 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1124 14:21:20.696237  206362 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1124 14:21:20.708469  206362 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1124 14:21:20.713035  206362 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1124 14:21:20.719529  206362 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1124 14:21:20.723954  206362 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1124 14:21:21.026245  206362 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1124 14:21:21.456386  206362 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1124 14:21:22.028135  206362 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1124 14:21:22.029558  206362 kubeadm.go:319] 
	I1124 14:21:22.029637  206362 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1124 14:21:22.029647  206362 kubeadm.go:319] 
	I1124 14:21:22.029748  206362 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1124 14:21:22.029766  206362 kubeadm.go:319] 
	I1124 14:21:22.029800  206362 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1124 14:21:22.029868  206362 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1124 14:21:22.029933  206362 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1124 14:21:22.029942  206362 kubeadm.go:319] 
	I1124 14:21:22.030002  206362 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1124 14:21:22.030009  206362 kubeadm.go:319] 
	I1124 14:21:22.030058  206362 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1124 14:21:22.030065  206362 kubeadm.go:319] 
	I1124 14:21:22.030117  206362 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1124 14:21:22.030195  206362 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1124 14:21:22.030266  206362 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1124 14:21:22.030277  206362 kubeadm.go:319] 
	I1124 14:21:22.030362  206362 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1124 14:21:22.030442  206362 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1124 14:21:22.030449  206362 kubeadm.go:319] 
	I1124 14:21:22.030534  206362 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token ykqk1e.ortzi4p02rc3bq9p \
	I1124 14:21:22.030644  206362 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:37f0f49cec723293ddb4e564b6685275917c85627d2c55051ccb0f083d16274f \
	I1124 14:21:22.030672  206362 kubeadm.go:319] 	--control-plane 
	I1124 14:21:22.030680  206362 kubeadm.go:319] 
	I1124 14:21:22.030766  206362 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1124 14:21:22.030773  206362 kubeadm.go:319] 
	I1124 14:21:22.030856  206362 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token ykqk1e.ortzi4p02rc3bq9p \
	I1124 14:21:22.030966  206362 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:37f0f49cec723293ddb4e564b6685275917c85627d2c55051ccb0f083d16274f 
	I1124 14:21:22.036351  206362 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1124 14:21:22.036594  206362 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1124 14:21:22.036707  206362 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
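	All three preflight warnings above are non-fatal. The last one even names its own fix, which on a persistent host would be:
	  sudo systemctl enable kubelet.service
	minikube skips this because it starts the kubelet itself (see the systemctl start kubelet run a few lines below).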
	I1124 14:21:22.036729  206362 cni.go:84] Creating CNI manager for ""
	I1124 14:21:22.036744  206362 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 14:21:22.039954  206362 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1124 14:21:22.042936  206362 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1124 14:21:22.048436  206362 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1124 14:21:22.048462  206362 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1124 14:21:22.063094  206362 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
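	With the docker driver and the crio runtime, the cni.go lines above show minikube recommending kindnet and applying its generated manifest with the cluster's own kubectl. Once applied, the DaemonSet pods can be checked with (label assumed from the upstream kindnet manifest):
	  kubectl -n kube-system get pods -l app=kindnet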
	I1124 14:21:22.402611  206362 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1124 14:21:22.402751  206362 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:21:22.402821  206362 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-948249 minikube.k8s.io/updated_at=2025_11_24T14_21_22_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=b5d1c9f4e75f4e638a533695fd62619949cefcab minikube.k8s.io/name=newest-cni-948249 minikube.k8s.io/primary=true
	I1124 14:21:22.621024  206362 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:21:22.621129  206362 ops.go:34] apiserver oom_adj: -16
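	The oom_adj check reads the apiserver's OOM score adjustment; the reported -16 means the kernel's OOM killer will strongly prefer other processes over the apiserver. By hand, inside the node:
	  cat /proc/$(pgrep kube-apiserver)/oom_adj
	  # -16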
	W1124 14:21:21.633392  202335 node_ready.go:57] node "default-k8s-diff-port-152851" has "Ready":"False" status (will retry)
	W1124 14:21:24.133246  202335 node_ready.go:57] node "default-k8s-diff-port-152851" has "Ready":"False" status (will retry)
	I1124 14:21:23.121133  206362 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:21:23.621712  206362 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:21:24.121439  206362 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:21:24.621371  206362 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:21:25.121665  206362 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:21:25.621120  206362 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:21:26.121115  206362 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:21:26.223991  206362 kubeadm.go:1114] duration metric: took 3.82128294s to wait for elevateKubeSystemPrivileges
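	The run of kubectl get sa default calls at roughly 500ms intervals is the elevateKubeSystemPrivileges wait: the minikube-rbac cluster-admin binding created at 14:21:22.402 only takes effect once the default ServiceAccount exists. A hand-rolled equivalent of the poll (a sketch; the real interval and timeout live in minikube's Go code):
	  until sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default \
	        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	    sleep 0.5
	  done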
	I1124 14:21:26.224104  206362 kubeadm.go:403] duration metric: took 19.916230015s to StartCluster
	I1124 14:21:26.224140  206362 settings.go:142] acquiring lock: {Name:mk89c1ba43c874315f683e1eb3a8f5ff3817a931 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:21:26.224245  206362 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21932-2805/kubeconfig
	I1124 14:21:26.225221  206362 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2805/kubeconfig: {Name:mk95d10d27091d631e85a5a3c35d5e4e38630871 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:21:26.225539  206362 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1124 14:21:26.225791  206362 config.go:182] Loaded profile config "newest-cni-948249": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 14:21:26.225891  206362 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 14:21:26.225959  206362 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-948249"
	I1124 14:21:26.225974  206362 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-948249"
	I1124 14:21:26.225994  206362 host.go:66] Checking if "newest-cni-948249" exists ...
	I1124 14:21:26.226468  206362 cli_runner.go:164] Run: docker container inspect newest-cni-948249 --format={{.State.Status}}
	I1124 14:21:26.225865  206362 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 14:21:26.227161  206362 addons.go:70] Setting default-storageclass=true in profile "newest-cni-948249"
	I1124 14:21:26.227186  206362 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-948249"
	I1124 14:21:26.227482  206362 cli_runner.go:164] Run: docker container inspect newest-cni-948249 --format={{.State.Status}}
	I1124 14:21:26.231570  206362 out.go:179] * Verifying Kubernetes components...
	I1124 14:21:26.234952  206362 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 14:21:26.262500  206362 addons.go:239] Setting addon default-storageclass=true in "newest-cni-948249"
	I1124 14:21:26.262540  206362 host.go:66] Checking if "newest-cni-948249" exists ...
	I1124 14:21:26.262952  206362 cli_runner.go:164] Run: docker container inspect newest-cni-948249 --format={{.State.Status}}
	I1124 14:21:26.281426  206362 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 14:21:26.284325  206362 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 14:21:26.284347  206362 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 14:21:26.284439  206362 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-948249
	I1124 14:21:26.298429  206362 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 14:21:26.298450  206362 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 14:21:26.298510  206362 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-948249
	I1124 14:21:26.335628  206362 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/newest-cni-948249/id_rsa Username:docker}
	I1124 14:21:26.347705  206362 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/newest-cni-948249/id_rsa Username:docker}
	I1124 14:21:26.590037  206362 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1124 14:21:26.590128  206362 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 14:21:26.617081  206362 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 14:21:26.756866  206362 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
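	Both addons follow the same two-step pattern: scp the manifest to /etc/kubernetes/addons/ (lines 14:21:26.284 and 14:21:26.298), then kubectl apply it with the node-local kubeconfig, as above. Afterwards the result can be inspected with (pod name taken from this log; "standard" is the class name minikube's default-storageclass addon conventionally creates):
	  kubectl -n kube-system get pod storage-provisioner
	  kubectl get storageclass standard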
	I1124 14:21:27.371474  206362 api_server.go:52] waiting for apiserver process to appear ...
	I1124 14:21:27.371586  206362 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 14:21:27.375827  206362 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
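	The sed pipeline at 14:21:26.590 rewrites the coredns ConfigMap in place; the injected Corefile fragment (reconstructed from that sed expression) is what makes host.minikube.internal resolve to the host gateway:
	  hosts {
	     192.168.85.1 host.minikube.internal
	     fallthrough
	  }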
	I1124 14:21:27.669138  206362 api_server.go:72] duration metric: took 1.442436903s to wait for apiserver process to appear ...
	I1124 14:21:27.669163  206362 api_server.go:88] waiting for apiserver healthz status ...
	I1124 14:21:27.669180  206362 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1124 14:21:27.672128  206362 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1124 14:21:27.675870  206362 addons.go:530] duration metric: took 1.44996979s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1124 14:21:27.684037  206362 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1124 14:21:27.685313  206362 api_server.go:141] control plane version: v1.34.1
	I1124 14:21:27.685334  206362 api_server.go:131] duration metric: took 16.164459ms to wait for apiserver health ...
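	The healthz probe can be repeated by hand against the same endpoint (-k because the apiserver's serving cert is not in the local trust store):
	  curl -k https://192.168.85.2:8443/healthz
	  # ok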
	I1124 14:21:27.685362  206362 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 14:21:27.692176  206362 system_pods.go:59] 8 kube-system pods found
	I1124 14:21:27.692214  206362 system_pods.go:61] "coredns-66bc5c9577-6rv2z" [f569a6bf-bdcc-4176-8cb8-3bb68921e2da] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1124 14:21:27.692247  206362 system_pods.go:61] "etcd-newest-cni-948249" [963d5d58-180c-49d7-81e1-23b0a458bf9b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 14:21:27.692260  206362 system_pods.go:61] "kindnet-gtj2g" [e153411a-2f4b-4151-b83b-19611f170cfb] Running
	I1124 14:21:27.692268  206362 system_pods.go:61] "kube-apiserver-newest-cni-948249" [a30eecaf-1cbf-4072-a6aa-0c069801cc74] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1124 14:21:27.692276  206362 system_pods.go:61] "kube-controller-manager-newest-cni-948249" [edd42557-6e50-47eb-90fb-5d1bc56a8943] Running
	I1124 14:21:27.692292  206362 system_pods.go:61] "kube-proxy-tsnk9" [2cd4d95f-1e99-425c-948b-1ee004fea3ac] Running
	I1124 14:21:27.692302  206362 system_pods.go:61] "kube-scheduler-newest-cni-948249" [139bbe9e-626b-4937-a3a7-1929a3c43762] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 14:21:27.692322  206362 system_pods.go:61] "storage-provisioner" [c81e4590-cbb7-4278-bd3f-74f5be196395] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1124 14:21:27.692337  206362 system_pods.go:74] duration metric: took 6.960417ms to wait for pod list to return data ...
	I1124 14:21:27.692346  206362 default_sa.go:34] waiting for default service account to be created ...
	I1124 14:21:27.700791  206362 default_sa.go:45] found service account: "default"
	I1124 14:21:27.700818  206362 default_sa.go:55] duration metric: took 8.465507ms for default service account to be created ...
	I1124 14:21:27.700850  206362 kubeadm.go:587] duration metric: took 1.474136219s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1124 14:21:27.700874  206362 node_conditions.go:102] verifying NodePressure condition ...
	I1124 14:21:27.703802  206362 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1124 14:21:27.703840  206362 node_conditions.go:123] node cpu capacity is 2
	I1124 14:21:27.703855  206362 node_conditions.go:105] duration metric: took 2.97425ms to run NodePressure ...
	I1124 14:21:27.703867  206362 start.go:242] waiting for startup goroutines ...
	I1124 14:21:27.881127  206362 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-948249" context rescaled to 1 replicas
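	minikube scales single-node clusters down to one CoreDNS replica; the manual equivalent of the kapi.go rescale above is:
	  kubectl -n kube-system scale deployment coredns --replicas=1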
	I1124 14:21:27.881200  206362 start.go:247] waiting for cluster config update ...
	I1124 14:21:27.881227  206362 start.go:256] writing updated cluster config ...
	I1124 14:21:27.881555  206362 ssh_runner.go:195] Run: rm -f paused
	I1124 14:21:27.949212  206362 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1124 14:21:27.952199  206362 out.go:179] * Done! kubectl is now configured to use "newest-cni-948249" cluster and "default" namespace by default
	I1124 14:21:25.147875  202335 node_ready.go:49] node "default-k8s-diff-port-152851" is "Ready"
	I1124 14:21:25.147899  202335 node_ready.go:38] duration metric: took 40.018213421s for node "default-k8s-diff-port-152851" to be "Ready" ...
	I1124 14:21:25.147913  202335 api_server.go:52] waiting for apiserver process to appear ...
	I1124 14:21:25.147969  202335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 14:21:25.164344  202335 api_server.go:72] duration metric: took 42.011923441s to wait for apiserver process to appear ...
	I1124 14:21:25.164417  202335 api_server.go:88] waiting for apiserver healthz status ...
	I1124 14:21:25.164467  202335 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1124 14:21:25.176440  202335 api_server.go:279] https://192.168.76.2:8444/healthz returned 200:
	ok
	I1124 14:21:25.177721  202335 api_server.go:141] control plane version: v1.34.1
	I1124 14:21:25.177744  202335 api_server.go:131] duration metric: took 13.291518ms to wait for apiserver health ...
	I1124 14:21:25.177753  202335 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 14:21:25.181434  202335 system_pods.go:59] 8 kube-system pods found
	I1124 14:21:25.181513  202335 system_pods.go:61] "coredns-66bc5c9577-qnfqn" [386494d3-c6d0-46da-898f-5936bcc3bb40] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 14:21:25.181538  202335 system_pods.go:61] "etcd-default-k8s-diff-port-152851" [73849492-289b-4e8a-b132-076ac817ec77] Running
	I1124 14:21:25.181578  202335 system_pods.go:61] "kindnet-4j292" [b23f3231-3c24-4e8a-bb05-74e475601643] Running
	I1124 14:21:25.181603  202335 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-152851" [43967435-4b2a-4555-879f-03c39fe3874a] Running
	I1124 14:21:25.181624  202335 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-152851" [088e57fd-b148-4545-93c8-e115d7ce1c9e] Running
	I1124 14:21:25.181659  202335 system_pods.go:61] "kube-proxy-m92jb" [118788fe-af1a-46f0-8ff3-7c4a381d36fd] Running
	I1124 14:21:25.181684  202335 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-152851" [59ad535f-afc5-418d-af06-b88121856fc9] Running
	I1124 14:21:25.181704  202335 system_pods.go:61] "storage-provisioner" [21b060b9-5567-4a41-8e79-351855fb6f30] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 14:21:25.181742  202335 system_pods.go:74] duration metric: took 3.982819ms to wait for pod list to return data ...
	I1124 14:21:25.181768  202335 default_sa.go:34] waiting for default service account to be created ...
	I1124 14:21:25.187884  202335 default_sa.go:45] found service account: "default"
	I1124 14:21:25.187956  202335 default_sa.go:55] duration metric: took 6.168589ms for default service account to be created ...
	I1124 14:21:25.187980  202335 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 14:21:25.205201  202335 system_pods.go:86] 8 kube-system pods found
	I1124 14:21:25.205281  202335 system_pods.go:89] "coredns-66bc5c9577-qnfqn" [386494d3-c6d0-46da-898f-5936bcc3bb40] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 14:21:25.205322  202335 system_pods.go:89] "etcd-default-k8s-diff-port-152851" [73849492-289b-4e8a-b132-076ac817ec77] Running
	I1124 14:21:25.205344  202335 system_pods.go:89] "kindnet-4j292" [b23f3231-3c24-4e8a-bb05-74e475601643] Running
	I1124 14:21:25.205379  202335 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-152851" [43967435-4b2a-4555-879f-03c39fe3874a] Running
	I1124 14:21:25.205407  202335 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-152851" [088e57fd-b148-4545-93c8-e115d7ce1c9e] Running
	I1124 14:21:25.205430  202335 system_pods.go:89] "kube-proxy-m92jb" [118788fe-af1a-46f0-8ff3-7c4a381d36fd] Running
	I1124 14:21:25.205476  202335 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-152851" [59ad535f-afc5-418d-af06-b88121856fc9] Running
	I1124 14:21:25.205506  202335 system_pods.go:89] "storage-provisioner" [21b060b9-5567-4a41-8e79-351855fb6f30] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 14:21:25.205560  202335 retry.go:31] will retry after 202.836034ms: missing components: kube-dns
	I1124 14:21:25.412443  202335 system_pods.go:86] 8 kube-system pods found
	I1124 14:21:25.412477  202335 system_pods.go:89] "coredns-66bc5c9577-qnfqn" [386494d3-c6d0-46da-898f-5936bcc3bb40] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 14:21:25.412484  202335 system_pods.go:89] "etcd-default-k8s-diff-port-152851" [73849492-289b-4e8a-b132-076ac817ec77] Running
	I1124 14:21:25.412491  202335 system_pods.go:89] "kindnet-4j292" [b23f3231-3c24-4e8a-bb05-74e475601643] Running
	I1124 14:21:25.412495  202335 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-152851" [43967435-4b2a-4555-879f-03c39fe3874a] Running
	I1124 14:21:25.412500  202335 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-152851" [088e57fd-b148-4545-93c8-e115d7ce1c9e] Running
	I1124 14:21:25.412504  202335 system_pods.go:89] "kube-proxy-m92jb" [118788fe-af1a-46f0-8ff3-7c4a381d36fd] Running
	I1124 14:21:25.412509  202335 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-152851" [59ad535f-afc5-418d-af06-b88121856fc9] Running
	I1124 14:21:25.412514  202335 system_pods.go:89] "storage-provisioner" [21b060b9-5567-4a41-8e79-351855fb6f30] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 14:21:25.412529  202335 retry.go:31] will retry after 305.033609ms: missing components: kube-dns
	I1124 14:21:25.721830  202335 system_pods.go:86] 8 kube-system pods found
	I1124 14:21:25.721867  202335 system_pods.go:89] "coredns-66bc5c9577-qnfqn" [386494d3-c6d0-46da-898f-5936bcc3bb40] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 14:21:25.721875  202335 system_pods.go:89] "etcd-default-k8s-diff-port-152851" [73849492-289b-4e8a-b132-076ac817ec77] Running
	I1124 14:21:25.721883  202335 system_pods.go:89] "kindnet-4j292" [b23f3231-3c24-4e8a-bb05-74e475601643] Running
	I1124 14:21:25.721888  202335 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-152851" [43967435-4b2a-4555-879f-03c39fe3874a] Running
	I1124 14:21:25.721893  202335 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-152851" [088e57fd-b148-4545-93c8-e115d7ce1c9e] Running
	I1124 14:21:25.721900  202335 system_pods.go:89] "kube-proxy-m92jb" [118788fe-af1a-46f0-8ff3-7c4a381d36fd] Running
	I1124 14:21:25.721909  202335 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-152851" [59ad535f-afc5-418d-af06-b88121856fc9] Running
	I1124 14:21:25.721918  202335 system_pods.go:89] "storage-provisioner" [21b060b9-5567-4a41-8e79-351855fb6f30] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 14:21:25.721938  202335 retry.go:31] will retry after 319.256914ms: missing components: kube-dns
	I1124 14:21:26.044777  202335 system_pods.go:86] 8 kube-system pods found
	I1124 14:21:26.044812  202335 system_pods.go:89] "coredns-66bc5c9577-qnfqn" [386494d3-c6d0-46da-898f-5936bcc3bb40] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 14:21:26.044820  202335 system_pods.go:89] "etcd-default-k8s-diff-port-152851" [73849492-289b-4e8a-b132-076ac817ec77] Running
	I1124 14:21:26.044828  202335 system_pods.go:89] "kindnet-4j292" [b23f3231-3c24-4e8a-bb05-74e475601643] Running
	I1124 14:21:26.044833  202335 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-152851" [43967435-4b2a-4555-879f-03c39fe3874a] Running
	I1124 14:21:26.044837  202335 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-152851" [088e57fd-b148-4545-93c8-e115d7ce1c9e] Running
	I1124 14:21:26.044841  202335 system_pods.go:89] "kube-proxy-m92jb" [118788fe-af1a-46f0-8ff3-7c4a381d36fd] Running
	I1124 14:21:26.044846  202335 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-152851" [59ad535f-afc5-418d-af06-b88121856fc9] Running
	I1124 14:21:26.044852  202335 system_pods.go:89] "storage-provisioner" [21b060b9-5567-4a41-8e79-351855fb6f30] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 14:21:26.044876  202335 retry.go:31] will retry after 608.41375ms: missing components: kube-dns
	I1124 14:21:26.657062  202335 system_pods.go:86] 8 kube-system pods found
	I1124 14:21:26.657093  202335 system_pods.go:89] "coredns-66bc5c9577-qnfqn" [386494d3-c6d0-46da-898f-5936bcc3bb40] Running
	I1124 14:21:26.657100  202335 system_pods.go:89] "etcd-default-k8s-diff-port-152851" [73849492-289b-4e8a-b132-076ac817ec77] Running
	I1124 14:21:26.657105  202335 system_pods.go:89] "kindnet-4j292" [b23f3231-3c24-4e8a-bb05-74e475601643] Running
	I1124 14:21:26.657110  202335 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-152851" [43967435-4b2a-4555-879f-03c39fe3874a] Running
	I1124 14:21:26.657116  202335 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-152851" [088e57fd-b148-4545-93c8-e115d7ce1c9e] Running
	I1124 14:21:26.657121  202335 system_pods.go:89] "kube-proxy-m92jb" [118788fe-af1a-46f0-8ff3-7c4a381d36fd] Running
	I1124 14:21:26.657126  202335 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-152851" [59ad535f-afc5-418d-af06-b88121856fc9] Running
	I1124 14:21:26.657129  202335 system_pods.go:89] "storage-provisioner" [21b060b9-5567-4a41-8e79-351855fb6f30] Running
	I1124 14:21:26.657137  202335 system_pods.go:126] duration metric: took 1.469137917s to wait for k8s-apps to be running ...
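	The five nearly identical "8 kube-system pods found" blocks above are a single poll loop: retry.go re-lists the pods with growing, jittered delays (203ms, 305ms, 319ms, 608ms) until kube-dns is no longer the missing component. The same condition can be awaited declaratively (k8s-app=kube-dns is the standard CoreDNS pod label):
	  kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=2m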
	I1124 14:21:26.657145  202335 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 14:21:26.657201  202335 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 14:21:26.673247  202335 system_svc.go:56] duration metric: took 16.093082ms WaitForService to wait for kubelet
	I1124 14:21:26.673274  202335 kubeadm.go:587] duration metric: took 43.520857636s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 14:21:26.673290  202335 node_conditions.go:102] verifying NodePressure condition ...
	I1124 14:21:26.676352  202335 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1124 14:21:26.676432  202335 node_conditions.go:123] node cpu capacity is 2
	I1124 14:21:26.676469  202335 node_conditions.go:105] duration metric: took 3.172972ms to run NodePressure ...
	I1124 14:21:26.676499  202335 start.go:242] waiting for startup goroutines ...
	I1124 14:21:26.676530  202335 start.go:247] waiting for cluster config update ...
	I1124 14:21:26.676556  202335 start.go:256] writing updated cluster config ...
	I1124 14:21:26.676922  202335 ssh_runner.go:195] Run: rm -f paused
	I1124 14:21:26.683891  202335 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 14:21:26.687199  202335 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-qnfqn" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:21:26.694256  202335 pod_ready.go:94] pod "coredns-66bc5c9577-qnfqn" is "Ready"
	I1124 14:21:26.694328  202335 pod_ready.go:86] duration metric: took 7.107708ms for pod "coredns-66bc5c9577-qnfqn" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:21:26.697897  202335 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-152851" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:21:26.704456  202335 pod_ready.go:94] pod "etcd-default-k8s-diff-port-152851" is "Ready"
	I1124 14:21:26.704531  202335 pod_ready.go:86] duration metric: took 6.558255ms for pod "etcd-default-k8s-diff-port-152851" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:21:26.707708  202335 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-152851" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:21:26.714487  202335 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-152851" is "Ready"
	I1124 14:21:26.714559  202335 pod_ready.go:86] duration metric: took 6.773658ms for pod "kube-apiserver-default-k8s-diff-port-152851" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:21:26.717395  202335 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-152851" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:21:27.088227  202335 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-152851" is "Ready"
	I1124 14:21:27.088260  202335 pod_ready.go:86] duration metric: took 370.794074ms for pod "kube-controller-manager-default-k8s-diff-port-152851" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:21:27.289044  202335 pod_ready.go:83] waiting for pod "kube-proxy-m92jb" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:21:27.688368  202335 pod_ready.go:94] pod "kube-proxy-m92jb" is "Ready"
	I1124 14:21:27.688395  202335 pod_ready.go:86] duration metric: took 399.321984ms for pod "kube-proxy-m92jb" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:21:27.895715  202335 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-152851" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:21:28.289164  202335 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-152851" is "Ready"
	I1124 14:21:28.289188  202335 pod_ready.go:86] duration metric: took 393.4498ms for pod "kube-scheduler-default-k8s-diff-port-152851" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:21:28.289201  202335 pod_ready.go:40] duration metric: took 1.605279346s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 14:21:28.373678  202335 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1124 14:21:28.378767  202335 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-152851" cluster and "default" namespace by default
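	Both profiles finish with the same version-skew note: client 1.33.2 against server 1.34.1 is within kubectl's supported one-minor-version skew. The pairing can be confirmed with:
	  kubectl version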
	
	
	==> CRI-O <==
	Nov 24 14:21:27 newest-cni-948249 crio[839]: time="2025-11-24T14:21:27.194813868Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 14:21:27 newest-cni-948249 crio[839]: time="2025-11-24T14:21:27.20049012Z" level=info msg="Running pod sandbox: kube-system/kube-proxy-tsnk9/POD" id=81c1cb11-79a9-4fc9-90f1-ea490643c1e1 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 24 14:21:27 newest-cni-948249 crio[839]: time="2025-11-24T14:21:27.200558264Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 14:21:27 newest-cni-948249 crio[839]: time="2025-11-24T14:21:27.214740557Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=81c1cb11-79a9-4fc9-90f1-ea490643c1e1 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 24 14:21:27 newest-cni-948249 crio[839]: time="2025-11-24T14:21:27.219630577Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=17d01d11-f59f-468d-a816-4bdd4df1a1ce name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 24 14:21:27 newest-cni-948249 crio[839]: time="2025-11-24T14:21:27.223074099Z" level=info msg="Ran pod sandbox a6d2f30ad11fdcd721324983e9c4529e462eb17e096d9248b124b3e3cd2cb351 with infra container: kube-system/kube-proxy-tsnk9/POD" id=81c1cb11-79a9-4fc9-90f1-ea490643c1e1 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 24 14:21:27 newest-cni-948249 crio[839]: time="2025-11-24T14:21:27.225909001Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=243332de-8397-4ab8-beb9-56e0ae44acf3 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 14:21:27 newest-cni-948249 crio[839]: time="2025-11-24T14:21:27.235140374Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=cf05c463-a627-4f39-9972-843581bc6979 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 14:21:27 newest-cni-948249 crio[839]: time="2025-11-24T14:21:27.243738174Z" level=info msg="Creating container: kube-system/kube-proxy-tsnk9/kube-proxy" id=62b76d9f-b7bf-426a-a8e5-01ae83ae9bbf name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 14:21:27 newest-cni-948249 crio[839]: time="2025-11-24T14:21:27.244273318Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 14:21:27 newest-cni-948249 crio[839]: time="2025-11-24T14:21:27.244008511Z" level=info msg="Ran pod sandbox 0d1147dfee83742c59a6b0cbf3de182b5cf94d1f4cbe9208ce3fcba7a7a90cf3 with infra container: kube-system/kindnet-gtj2g/POD" id=17d01d11-f59f-468d-a816-4bdd4df1a1ce name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 24 14:21:27 newest-cni-948249 crio[839]: time="2025-11-24T14:21:27.246378046Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=a120f0d3-6116-414a-b877-1383125cc70d name=/runtime.v1.ImageService/ImageStatus
	Nov 24 14:21:27 newest-cni-948249 crio[839]: time="2025-11-24T14:21:27.250572871Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=1d418c5e-8306-42cb-b375-757a23bf1bbf name=/runtime.v1.ImageService/ImageStatus
	Nov 24 14:21:27 newest-cni-948249 crio[839]: time="2025-11-24T14:21:27.25948219Z" level=info msg="Creating container: kube-system/kindnet-gtj2g/kindnet-cni" id=35021fab-2b50-4583-9826-2c5c33a83d45 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 14:21:27 newest-cni-948249 crio[839]: time="2025-11-24T14:21:27.259638852Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 14:21:27 newest-cni-948249 crio[839]: time="2025-11-24T14:21:27.265267267Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 14:21:27 newest-cni-948249 crio[839]: time="2025-11-24T14:21:27.268477171Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 14:21:27 newest-cni-948249 crio[839]: time="2025-11-24T14:21:27.269495488Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 14:21:27 newest-cni-948249 crio[839]: time="2025-11-24T14:21:27.271590951Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 14:21:27 newest-cni-948249 crio[839]: time="2025-11-24T14:21:27.31232122Z" level=info msg="Created container 1fa8c1ace3041dfbe654099ffff78a2b90f66c119564389c63c977434a9a8601: kube-system/kube-proxy-tsnk9/kube-proxy" id=62b76d9f-b7bf-426a-a8e5-01ae83ae9bbf name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 14:21:27 newest-cni-948249 crio[839]: time="2025-11-24T14:21:27.312747259Z" level=info msg="Created container bc5a80ca04053093334ab1ca8057f552e0b4bed6a7dc8d068b9a1ea83679aa64: kube-system/kindnet-gtj2g/kindnet-cni" id=35021fab-2b50-4583-9826-2c5c33a83d45 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 14:21:27 newest-cni-948249 crio[839]: time="2025-11-24T14:21:27.313675983Z" level=info msg="Starting container: 1fa8c1ace3041dfbe654099ffff78a2b90f66c119564389c63c977434a9a8601" id=b130ad0c-0165-4629-94a7-5be40ed9a006 name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 14:21:27 newest-cni-948249 crio[839]: time="2025-11-24T14:21:27.314096434Z" level=info msg="Starting container: bc5a80ca04053093334ab1ca8057f552e0b4bed6a7dc8d068b9a1ea83679aa64" id=7bf1a273-0eb2-464a-a331-ed2bc2d55de4 name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 14:21:27 newest-cni-948249 crio[839]: time="2025-11-24T14:21:27.327663976Z" level=info msg="Started container" PID=1475 containerID=bc5a80ca04053093334ab1ca8057f552e0b4bed6a7dc8d068b9a1ea83679aa64 description=kube-system/kindnet-gtj2g/kindnet-cni id=7bf1a273-0eb2-464a-a331-ed2bc2d55de4 name=/runtime.v1.RuntimeService/StartContainer sandboxID=0d1147dfee83742c59a6b0cbf3de182b5cf94d1f4cbe9208ce3fcba7a7a90cf3
	Nov 24 14:21:27 newest-cni-948249 crio[839]: time="2025-11-24T14:21:27.329839252Z" level=info msg="Started container" PID=1479 containerID=1fa8c1ace3041dfbe654099ffff78a2b90f66c119564389c63c977434a9a8601 description=kube-system/kube-proxy-tsnk9/kube-proxy id=b130ad0c-0165-4629-94a7-5be40ed9a006 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a6d2f30ad11fdcd721324983e9c4529e462eb17e096d9248b124b3e3cd2cb351
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	bc5a80ca04053       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   2 seconds ago       Running             kindnet-cni               0                   0d1147dfee837       kindnet-gtj2g                               kube-system
	1fa8c1ace3041       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   2 seconds ago       Running             kube-proxy                0                   a6d2f30ad11fd       kube-proxy-tsnk9                            kube-system
	cdd32f137b979       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   15 seconds ago      Running             kube-controller-manager   0                   783aca2249b99       kube-controller-manager-newest-cni-948249   kube-system
	2a7ca9a5ca139       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   15 seconds ago      Running             kube-scheduler            0                   f69eb89581d5b       kube-scheduler-newest-cni-948249            kube-system
	462ccb2e8803b       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   15 seconds ago      Running             etcd                      0                   20bb041fb4a4a       etcd-newest-cni-948249                      kube-system
	be0a9f0dc38dc       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   15 seconds ago      Running             kube-apiserver            0                   025d3bc5cb553       kube-apiserver-newest-cni-948249            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-948249
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-948249
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b5d1c9f4e75f4e638a533695fd62619949cefcab
	                    minikube.k8s.io/name=newest-cni-948249
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T14_21_22_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 14:21:18 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-948249
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 14:21:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 14:21:21 +0000   Mon, 24 Nov 2025 14:21:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 14:21:21 +0000   Mon, 24 Nov 2025 14:21:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 14:21:21 +0000   Mon, 24 Nov 2025 14:21:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Mon, 24 Nov 2025 14:21:21 +0000   Mon, 24 Nov 2025 14:21:13 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    newest-cni-948249
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 7283ea1857f18f20a875c29069214c9d
	  System UUID:                20c80147-87d0-4669-a827-37cbb2c6caf8
	  Boot ID:                    1b5f797b-5607-4a65-8de2-379783b7e272
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-948249                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         8s
	  kube-system                 kindnet-gtj2g                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      3s
	  kube-system                 kube-apiserver-newest-cni-948249             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 kube-controller-manager-newest-cni-948249    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kube-proxy-tsnk9                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         3s
	  kube-system                 kube-scheduler-newest-cni-948249             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 1s                 kube-proxy       
	  Warning  CgroupV1                 16s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  16s (x8 over 16s)  kubelet          Node newest-cni-948249 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    16s (x8 over 16s)  kubelet          Node newest-cni-948249 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     16s (x8 over 16s)  kubelet          Node newest-cni-948249 status is now: NodeHasSufficientPID
	  Normal   Starting                 8s                 kubelet          Starting kubelet.
	  Warning  CgroupV1                 8s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  8s                 kubelet          Node newest-cni-948249 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8s                 kubelet          Node newest-cni-948249 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8s                 kubelet          Node newest-cni-948249 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           4s                 node-controller  Node newest-cni-948249 event: Registered Node newest-cni-948249 in Controller
	
	
	==> dmesg <==
	[Nov24 13:57] overlayfs: idmapped layers are currently not supported
	[Nov24 13:58] overlayfs: idmapped layers are currently not supported
	[  +2.963383] overlayfs: idmapped layers are currently not supported
	[ +47.364934] overlayfs: idmapped layers are currently not supported
	[Nov24 13:59] overlayfs: idmapped layers are currently not supported
	[Nov24 14:00] overlayfs: idmapped layers are currently not supported
	[ +26.972375] overlayfs: idmapped layers are currently not supported
	[Nov24 14:02] overlayfs: idmapped layers are currently not supported
	[Nov24 14:03] overlayfs: idmapped layers are currently not supported
	[Nov24 14:05] overlayfs: idmapped layers are currently not supported
	[Nov24 14:07] overlayfs: idmapped layers are currently not supported
	[ +22.741489] overlayfs: idmapped layers are currently not supported
	[Nov24 14:11] overlayfs: idmapped layers are currently not supported
	[Nov24 14:13] overlayfs: idmapped layers are currently not supported
	[ +29.661409] overlayfs: idmapped layers are currently not supported
	[ +14.398898] overlayfs: idmapped layers are currently not supported
	[Nov24 14:14] overlayfs: idmapped layers are currently not supported
	[ +36.148198] overlayfs: idmapped layers are currently not supported
	[Nov24 14:16] overlayfs: idmapped layers are currently not supported
	[Nov24 14:17] overlayfs: idmapped layers are currently not supported
	[Nov24 14:18] overlayfs: idmapped layers are currently not supported
	[ +49.916713] overlayfs: idmapped layers are currently not supported
	[Nov24 14:19] overlayfs: idmapped layers are currently not supported
	[Nov24 14:20] overlayfs: idmapped layers are currently not supported
	[Nov24 14:21] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [462ccb2e8803bd93e4e2795b2a10b93d6d9d4987cacf2e783b97c6c1bedd50c0] <==
	{"level":"warn","ts":"2025-11-24T14:21:16.848705Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59174","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:21:16.888542Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59196","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:21:16.917444Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59212","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:21:16.946299Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59238","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:21:16.987444Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59252","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:21:17.001656Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59264","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:21:17.028870Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59280","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:21:17.071141Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59298","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:21:17.091268Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59314","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:21:17.134116Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59340","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:21:17.146186Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59354","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:21:17.168923Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59382","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:21:17.187583Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59392","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:21:17.210871Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59412","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:21:17.240646Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59428","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:21:17.247216Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59454","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:21:17.271076Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59474","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:21:17.284115Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59490","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:21:17.304627Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59516","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:21:17.323742Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59530","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:21:17.340926Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59536","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:21:17.372588Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59562","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:21:17.386371Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59584","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:21:17.408489Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59600","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:21:17.512517Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59610","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 14:21:29 up  2:04,  0 user,  load average: 2.63, 2.85, 2.52
	Linux newest-cni-948249 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [bc5a80ca04053093334ab1ca8057f552e0b4bed6a7dc8d068b9a1ea83679aa64] <==
	I1124 14:21:27.513731       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1124 14:21:27.513988       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1124 14:21:27.514102       1 main.go:148] setting mtu 1500 for CNI 
	I1124 14:21:27.514113       1 main.go:178] kindnetd IP family: "ipv4"
	I1124 14:21:27.514123       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-24T14:21:27Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1124 14:21:27.714727       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1124 14:21:27.714757       1 controller.go:381] "Waiting for informer caches to sync"
	I1124 14:21:27.714766       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1124 14:21:27.714871       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [be0a9f0dc38dc960758a63c980a71736396dacdfe6047478853499b080853867] <==
	I1124 14:21:18.776479       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1124 14:21:18.776955       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1124 14:21:18.777210       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1124 14:21:18.803452       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 14:21:18.803608       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1124 14:21:18.803954       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1124 14:21:18.830042       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 14:21:18.830610       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1124 14:21:19.373586       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1124 14:21:19.380592       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1124 14:21:19.380615       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1124 14:21:20.214993       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1124 14:21:20.271646       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1124 14:21:20.403126       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1124 14:21:20.410825       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1124 14:21:20.412035       1 controller.go:667] quota admission added evaluator for: endpoints
	I1124 14:21:20.417397       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1124 14:21:20.754634       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1124 14:21:21.438123       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1124 14:21:21.454701       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1124 14:21:21.470235       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1124 14:21:26.473000       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 14:21:26.482893       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 14:21:26.721626       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1124 14:21:26.783452       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [cdd32f137b9795cdc6b2f7f980acfe072200a7be7cae23992ba1839cdce48913] <==
	I1124 14:21:25.800578       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1124 14:21:25.800629       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1124 14:21:25.800664       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1124 14:21:25.800665       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1124 14:21:25.800731       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1124 14:21:25.800830       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1124 14:21:25.800991       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1124 14:21:25.801698       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1124 14:21:25.804333       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1124 14:21:25.804387       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1124 14:21:25.804442       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1124 14:21:25.805624       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1124 14:21:25.806826       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1124 14:21:25.806885       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1124 14:21:25.806971       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1124 14:21:25.806984       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1124 14:21:25.806991       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1124 14:21:25.820240       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1124 14:21:25.820639       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1124 14:21:25.820753       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 14:21:25.826476       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="newest-cni-948249" podCIDRs=["10.42.0.0/24"]
	I1124 14:21:25.851539       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 14:21:25.851569       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1124 14:21:25.851576       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1124 14:21:25.867562       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [1fa8c1ace3041dfbe654099ffff78a2b90f66c119564389c63c977434a9a8601] <==
	I1124 14:21:27.566944       1 server_linux.go:53] "Using iptables proxy"
	I1124 14:21:27.689949       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1124 14:21:27.792086       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1124 14:21:27.792215       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1124 14:21:27.792299       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 14:21:27.832258       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 14:21:27.832320       1 server_linux.go:132] "Using iptables Proxier"
	I1124 14:21:27.836435       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 14:21:27.836767       1 server.go:527] "Version info" version="v1.34.1"
	I1124 14:21:27.836789       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 14:21:27.840857       1 config.go:106] "Starting endpoint slice config controller"
	I1124 14:21:27.840933       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 14:21:27.840993       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 14:21:27.841011       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1124 14:21:27.841272       1 config.go:200] "Starting service config controller"
	I1124 14:21:27.841312       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 14:21:27.841539       1 config.go:309] "Starting node config controller"
	I1124 14:21:27.841581       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 14:21:27.841609       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1124 14:21:27.941111       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1124 14:21:27.941213       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1124 14:21:27.942324       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [2a7ca9a5ca139e9fa74f8f59b7bba82374233bb31a656177cd9c066d4f8212df] <==
	E1124 14:21:18.784031       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1124 14:21:18.784167       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1124 14:21:18.784243       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1124 14:21:18.790938       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1124 14:21:18.791116       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1124 14:21:18.791208       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1124 14:21:18.791298       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1124 14:21:18.791400       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1124 14:21:18.791561       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1124 14:21:18.794873       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1124 14:21:18.795000       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1124 14:21:18.795539       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1124 14:21:18.795669       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1124 14:21:18.795745       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1124 14:21:18.795821       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1124 14:21:19.593243       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1124 14:21:19.619337       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1124 14:21:19.652765       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1124 14:21:19.672302       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1124 14:21:19.693185       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1124 14:21:19.713628       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1124 14:21:19.746881       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1124 14:21:19.801972       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1124 14:21:19.863912       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	I1124 14:21:20.356039       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 24 14:21:22 newest-cni-948249 kubelet[1296]: I1124 14:21:22.326160    1296 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 24 14:21:22 newest-cni-948249 kubelet[1296]: I1124 14:21:22.421196    1296 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-948249"
	Nov 24 14:21:22 newest-cni-948249 kubelet[1296]: I1124 14:21:22.421529    1296 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-948249"
	Nov 24 14:21:22 newest-cni-948249 kubelet[1296]: I1124 14:21:22.421675    1296 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-948249"
	Nov 24 14:21:22 newest-cni-948249 kubelet[1296]: E1124 14:21:22.444837    1296 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-948249\" already exists" pod="kube-system/kube-apiserver-newest-cni-948249"
	Nov 24 14:21:22 newest-cni-948249 kubelet[1296]: E1124 14:21:22.458483    1296 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-948249\" already exists" pod="kube-system/kube-scheduler-newest-cni-948249"
	Nov 24 14:21:22 newest-cni-948249 kubelet[1296]: E1124 14:21:22.465594    1296 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-948249\" already exists" pod="kube-system/etcd-newest-cni-948249"
	Nov 24 14:21:22 newest-cni-948249 kubelet[1296]: I1124 14:21:22.491001    1296 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-newest-cni-948249" podStartSLOduration=1.4909797089999999 podStartE2EDuration="1.490979709s" podCreationTimestamp="2025-11-24 14:21:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 14:21:22.465978703 +0000 UTC m=+1.261071966" watchObservedRunningTime="2025-11-24 14:21:22.490979709 +0000 UTC m=+1.286072989"
	Nov 24 14:21:22 newest-cni-948249 kubelet[1296]: I1124 14:21:22.491210    1296 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-newest-cni-948249" podStartSLOduration=1.491201874 podStartE2EDuration="1.491201874s" podCreationTimestamp="2025-11-24 14:21:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 14:21:22.490627041 +0000 UTC m=+1.285720312" watchObservedRunningTime="2025-11-24 14:21:22.491201874 +0000 UTC m=+1.286295137"
	Nov 24 14:21:22 newest-cni-948249 kubelet[1296]: I1124 14:21:22.547317    1296 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-newest-cni-948249" podStartSLOduration=2.547297882 podStartE2EDuration="2.547297882s" podCreationTimestamp="2025-11-24 14:21:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 14:21:22.520640711 +0000 UTC m=+1.315734006" watchObservedRunningTime="2025-11-24 14:21:22.547297882 +0000 UTC m=+1.342391145"
	Nov 24 14:21:22 newest-cni-948249 kubelet[1296]: I1124 14:21:22.569983    1296 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-newest-cni-948249" podStartSLOduration=1.569964382 podStartE2EDuration="1.569964382s" podCreationTimestamp="2025-11-24 14:21:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 14:21:22.547676905 +0000 UTC m=+1.342770184" watchObservedRunningTime="2025-11-24 14:21:22.569964382 +0000 UTC m=+1.365057653"
	Nov 24 14:21:25 newest-cni-948249 kubelet[1296]: I1124 14:21:25.916602    1296 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Nov 24 14:21:25 newest-cni-948249 kubelet[1296]: I1124 14:21:25.917264    1296 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Nov 24 14:21:26 newest-cni-948249 kubelet[1296]: I1124 14:21:26.983066    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e153411a-2f4b-4151-b83b-19611f170cfb-xtables-lock\") pod \"kindnet-gtj2g\" (UID: \"e153411a-2f4b-4151-b83b-19611f170cfb\") " pod="kube-system/kindnet-gtj2g"
	Nov 24 14:21:26 newest-cni-948249 kubelet[1296]: I1124 14:21:26.983117    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2cd4d95f-1e99-425c-948b-1ee004fea3ac-kube-proxy\") pod \"kube-proxy-tsnk9\" (UID: \"2cd4d95f-1e99-425c-948b-1ee004fea3ac\") " pod="kube-system/kube-proxy-tsnk9"
	Nov 24 14:21:26 newest-cni-948249 kubelet[1296]: I1124 14:21:26.983185    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2cd4d95f-1e99-425c-948b-1ee004fea3ac-xtables-lock\") pod \"kube-proxy-tsnk9\" (UID: \"2cd4d95f-1e99-425c-948b-1ee004fea3ac\") " pod="kube-system/kube-proxy-tsnk9"
	Nov 24 14:21:26 newest-cni-948249 kubelet[1296]: I1124 14:21:26.983203    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2cd4d95f-1e99-425c-948b-1ee004fea3ac-lib-modules\") pod \"kube-proxy-tsnk9\" (UID: \"2cd4d95f-1e99-425c-948b-1ee004fea3ac\") " pod="kube-system/kube-proxy-tsnk9"
	Nov 24 14:21:26 newest-cni-948249 kubelet[1296]: I1124 14:21:26.983240    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rddps\" (UniqueName: \"kubernetes.io/projected/2cd4d95f-1e99-425c-948b-1ee004fea3ac-kube-api-access-rddps\") pod \"kube-proxy-tsnk9\" (UID: \"2cd4d95f-1e99-425c-948b-1ee004fea3ac\") " pod="kube-system/kube-proxy-tsnk9"
	Nov 24 14:21:26 newest-cni-948249 kubelet[1296]: I1124 14:21:26.983260    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/e153411a-2f4b-4151-b83b-19611f170cfb-cni-cfg\") pod \"kindnet-gtj2g\" (UID: \"e153411a-2f4b-4151-b83b-19611f170cfb\") " pod="kube-system/kindnet-gtj2g"
	Nov 24 14:21:26 newest-cni-948249 kubelet[1296]: I1124 14:21:26.983276    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e153411a-2f4b-4151-b83b-19611f170cfb-lib-modules\") pod \"kindnet-gtj2g\" (UID: \"e153411a-2f4b-4151-b83b-19611f170cfb\") " pod="kube-system/kindnet-gtj2g"
	Nov 24 14:21:26 newest-cni-948249 kubelet[1296]: I1124 14:21:26.983313    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkgt7\" (UniqueName: \"kubernetes.io/projected/e153411a-2f4b-4151-b83b-19611f170cfb-kube-api-access-mkgt7\") pod \"kindnet-gtj2g\" (UID: \"e153411a-2f4b-4151-b83b-19611f170cfb\") " pod="kube-system/kindnet-gtj2g"
	Nov 24 14:21:27 newest-cni-948249 kubelet[1296]: I1124 14:21:27.143982    1296 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 24 14:21:27 newest-cni-948249 kubelet[1296]: W1124 14:21:27.242302    1296 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/772438acfd0507cc7ff013b62dafaae325e30233c90e406c54940e5df5577713/crio-0d1147dfee83742c59a6b0cbf3de182b5cf94d1f4cbe9208ce3fcba7a7a90cf3 WatchSource:0}: Error finding container 0d1147dfee83742c59a6b0cbf3de182b5cf94d1f4cbe9208ce3fcba7a7a90cf3: Status 404 returned error can't find the container with id 0d1147dfee83742c59a6b0cbf3de182b5cf94d1f4cbe9208ce3fcba7a7a90cf3
	Nov 24 14:21:27 newest-cni-948249 kubelet[1296]: I1124 14:21:27.528899    1296 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-gtj2g" podStartSLOduration=1.5288808820000002 podStartE2EDuration="1.528880882s" podCreationTimestamp="2025-11-24 14:21:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 14:21:27.474502898 +0000 UTC m=+6.269596169" watchObservedRunningTime="2025-11-24 14:21:27.528880882 +0000 UTC m=+6.323974153"
	Nov 24 14:21:28 newest-cni-948249 kubelet[1296]: I1124 14:21:28.579901    1296 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-tsnk9" podStartSLOduration=2.579880352 podStartE2EDuration="2.579880352s" podCreationTimestamp="2025-11-24 14:21:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 14:21:27.529506086 +0000 UTC m=+6.324599341" watchObservedRunningTime="2025-11-24 14:21:28.579880352 +0000 UTC m=+7.374973623"
	

                                                
                                                
-- /stdout --
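Note on the logs above: the node reports Ready=False with reason KubeletNotReady ("no CNI configuration file in /etc/cni/net.d/"). Seconds after start that is expected on this profile, since kindnet had only just come up (see the kindnet log) and had not yet written its CNI config. A quick manual check, assuming the profile name taken from these logs, would be:

	minikube ssh -p newest-cni-948249 "ls /etc/cni/net.d/"
	kubectl --context newest-cni-948249 get nodes -o wide

Once kindnet writes its conflist (typically 10-kindnet.conflist; the filename is the usual kindnet default, not confirmed in these logs), the Ready condition should flip to True.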
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-948249 -n newest-cni-948249
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-948249 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-6rv2z storage-provisioner
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-948249 describe pod coredns-66bc5c9577-6rv2z storage-provisioner
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-948249 describe pod coredns-66bc5c9577-6rv2z storage-provisioner: exit status 1 (86.559223ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-6rv2z" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-948249 describe pod coredns-66bc5c9577-6rv2z storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.59s)
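The NotFound errors in the post-mortem above come from namespace scoping rather than missing pods: the listing at helpers_test.go:269 queried all namespaces (-A), but the describe at helpers_test.go:285 omits a namespace flag, so kubectl looks in default while coredns-66bc5c9577-6rv2z and storage-provisioner live in kube-system. A minimal corrected invocation, reusing the names from the helper output:

	kubectl --context newest-cni-948249 describe pod coredns-66bc5c9577-6rv2z storage-provisioner -n kube-system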

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (3.25s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-152851 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-152851 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (349.037846ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T14:21:38Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
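The exit 11 above is the same MK_ADDON_ENABLE_PAUSED pattern seen in the other failures in this report: before enabling an addon, minikube checks whether the cluster is paused by listing containers with runc, and on this crio node the runc state directory /run/runc does not exist, so `sudo runc list -f json` exits 1 and the check itself fails. A hedged reproduction against this profile (commands taken from or implied by the error text):

	minikube ssh -p default-k8s-diff-port-152851 "sudo runc list -f json"
	# crio tracks its containers via CRI, so the runtime-appropriate listing would be:
	minikube ssh -p default-k8s-diff-port-152851 "sudo crictl ps -a"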
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-152851 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-152851 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-152851 describe deploy/metrics-server -n kube-system: exit status 1 (104.061976ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
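This NotFound is consistent with the aborted enable above: the paused check runs before any manifests are applied, so the metrics-server Deployment was never created and the describe has nothing to find. Confirming that kube-system simply lacks the addon (context name from the test above):

	kubectl --context default-k8s-diff-port-152851 get deploy -n kube-system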
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-152851 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-152851
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-152851:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "bb380e4fa749c80f5c1b19c95fcad1ed1f835691b8483c3ebe22f8e66973175a",
	        "Created": "2025-11-24T14:20:10.30310035Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 202811,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T14:20:10.367692299Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/bb380e4fa749c80f5c1b19c95fcad1ed1f835691b8483c3ebe22f8e66973175a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/bb380e4fa749c80f5c1b19c95fcad1ed1f835691b8483c3ebe22f8e66973175a/hostname",
	        "HostsPath": "/var/lib/docker/containers/bb380e4fa749c80f5c1b19c95fcad1ed1f835691b8483c3ebe22f8e66973175a/hosts",
	        "LogPath": "/var/lib/docker/containers/bb380e4fa749c80f5c1b19c95fcad1ed1f835691b8483c3ebe22f8e66973175a/bb380e4fa749c80f5c1b19c95fcad1ed1f835691b8483c3ebe22f8e66973175a-json.log",
	        "Name": "/default-k8s-diff-port-152851",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "default-k8s-diff-port-152851:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-152851",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "bb380e4fa749c80f5c1b19c95fcad1ed1f835691b8483c3ebe22f8e66973175a",
	                "LowerDir": "/var/lib/docker/overlay2/47ed7fba18527758b18563a9035241518dd071ecf5409d539ee7b229eae0305b-init/diff:/var/lib/docker/overlay2/13a44a1c9c7389f495d930a01834ff28273a0e5eb2fe3411fc4db3ff0709690d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/47ed7fba18527758b18563a9035241518dd071ecf5409d539ee7b229eae0305b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/47ed7fba18527758b18563a9035241518dd071ecf5409d539ee7b229eae0305b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/47ed7fba18527758b18563a9035241518dd071ecf5409d539ee7b229eae0305b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-152851",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-152851/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-152851",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-152851",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-152851",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f651458cdf035b09661eeae1fd73a42274044a10fbf998fb79609e46fff91ee5",
	            "SandboxKey": "/var/run/docker/netns/f651458cdf03",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33078"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33079"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33082"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33080"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33081"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-152851": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "8e:93:22:85:58:c0",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "13603eff9881a10c42cb9841bf658813f5fbc60eabf578cb82466b4c09374f11",
	                    "EndpointID": "909f0be635965b2992a51dc20501e83787dc2018162a9f019b333f387021d316",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-152851",
	                        "bb380e4fa749"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
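Worth noting when reading the inspect output above: Docker reports the node container itself as running and not paused ("Status": "running", "Paused": false), so the MK_ADDON_ENABLE_PAUSED failure reflects the in-guest runtime check, not the container's state. A one-line spot check using Docker's Go-template formatting:

	docker inspect -f '{{.State.Status}} paused={{.State.Paused}}' default-k8s-diff-port-152851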
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-152851 -n default-k8s-diff-port-152851
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-152851 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-152851 logs -n 25: (1.585487271s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p embed-certs-720293 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-720293           │ jenkins │ v1.37.0 │ 24 Nov 25 14:17 UTC │ 24 Nov 25 14:19 UTC │
	│ addons  │ enable metrics-server -p no-preload-444317 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-444317            │ jenkins │ v1.37.0 │ 24 Nov 25 14:18 UTC │                     │
	│ stop    │ -p no-preload-444317 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-444317            │ jenkins │ v1.37.0 │ 24 Nov 25 14:18 UTC │ 24 Nov 25 14:18 UTC │
	│ addons  │ enable dashboard -p no-preload-444317 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-444317            │ jenkins │ v1.37.0 │ 24 Nov 25 14:18 UTC │ 24 Nov 25 14:18 UTC │
	│ start   │ -p no-preload-444317 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-444317            │ jenkins │ v1.37.0 │ 24 Nov 25 14:18 UTC │ 24 Nov 25 14:19 UTC │
	│ addons  │ enable metrics-server -p embed-certs-720293 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-720293           │ jenkins │ v1.37.0 │ 24 Nov 25 14:19 UTC │                     │
	│ stop    │ -p embed-certs-720293 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-720293           │ jenkins │ v1.37.0 │ 24 Nov 25 14:19 UTC │ 24 Nov 25 14:19 UTC │
	│ addons  │ enable dashboard -p embed-certs-720293 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-720293           │ jenkins │ v1.37.0 │ 24 Nov 25 14:19 UTC │ 24 Nov 25 14:19 UTC │
	│ start   │ -p embed-certs-720293 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-720293           │ jenkins │ v1.37.0 │ 24 Nov 25 14:19 UTC │ 24 Nov 25 14:20 UTC │
	│ image   │ no-preload-444317 image list --format=json                                                                                                                                                                                                    │ no-preload-444317            │ jenkins │ v1.37.0 │ 24 Nov 25 14:19 UTC │ 24 Nov 25 14:19 UTC │
	│ pause   │ -p no-preload-444317 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-444317            │ jenkins │ v1.37.0 │ 24 Nov 25 14:19 UTC │                     │
	│ delete  │ -p no-preload-444317                                                                                                                                                                                                                          │ no-preload-444317            │ jenkins │ v1.37.0 │ 24 Nov 25 14:20 UTC │ 24 Nov 25 14:20 UTC │
	│ delete  │ -p no-preload-444317                                                                                                                                                                                                                          │ no-preload-444317            │ jenkins │ v1.37.0 │ 24 Nov 25 14:20 UTC │ 24 Nov 25 14:20 UTC │
	│ delete  │ -p disable-driver-mounts-799392                                                                                                                                                                                                               │ disable-driver-mounts-799392 │ jenkins │ v1.37.0 │ 24 Nov 25 14:20 UTC │ 24 Nov 25 14:20 UTC │
	│ start   │ -p default-k8s-diff-port-152851 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-152851 │ jenkins │ v1.37.0 │ 24 Nov 25 14:20 UTC │ 24 Nov 25 14:21 UTC │
	│ image   │ embed-certs-720293 image list --format=json                                                                                                                                                                                                   │ embed-certs-720293           │ jenkins │ v1.37.0 │ 24 Nov 25 14:20 UTC │ 24 Nov 25 14:20 UTC │
	│ pause   │ -p embed-certs-720293 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-720293           │ jenkins │ v1.37.0 │ 24 Nov 25 14:20 UTC │                     │
	│ delete  │ -p embed-certs-720293                                                                                                                                                                                                                         │ embed-certs-720293           │ jenkins │ v1.37.0 │ 24 Nov 25 14:20 UTC │ 24 Nov 25 14:20 UTC │
	│ delete  │ -p embed-certs-720293                                                                                                                                                                                                                         │ embed-certs-720293           │ jenkins │ v1.37.0 │ 24 Nov 25 14:20 UTC │ 24 Nov 25 14:20 UTC │
	│ start   │ -p newest-cni-948249 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-948249            │ jenkins │ v1.37.0 │ 24 Nov 25 14:20 UTC │ 24 Nov 25 14:21 UTC │
	│ addons  │ enable metrics-server -p newest-cni-948249 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-948249            │ jenkins │ v1.37.0 │ 24 Nov 25 14:21 UTC │                     │
	│ stop    │ -p newest-cni-948249 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-948249            │ jenkins │ v1.37.0 │ 24 Nov 25 14:21 UTC │ 24 Nov 25 14:21 UTC │
	│ addons  │ enable dashboard -p newest-cni-948249 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-948249            │ jenkins │ v1.37.0 │ 24 Nov 25 14:21 UTC │ 24 Nov 25 14:21 UTC │
	│ start   │ -p newest-cni-948249 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-948249            │ jenkins │ v1.37.0 │ 24 Nov 25 14:21 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-152851 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-152851 │ jenkins │ v1.37.0 │ 24 Nov 25 14:21 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
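For reference, the profile examined in this post-mortem was created by the start invocation recorded in the Audit table above; in copy-pasteable form (flags verbatim from the table):

	out/minikube-linux-arm64 start -p default-k8s-diff-port-152851 \
	  --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 \
	  --driver=docker --container-runtime=crio --kubernetes-version=v1.34.1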
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 14:21:32
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 14:21:32.241151  209505 out.go:360] Setting OutFile to fd 1 ...
	I1124 14:21:32.241282  209505 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 14:21:32.241297  209505 out.go:374] Setting ErrFile to fd 2...
	I1124 14:21:32.241302  209505 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 14:21:32.241541  209505 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-2805/.minikube/bin
	I1124 14:21:32.241904  209505 out.go:368] Setting JSON to false
	I1124 14:21:32.242762  209505 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":7444,"bootTime":1763986649,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1124 14:21:32.242830  209505 start.go:143] virtualization:  
	I1124 14:21:32.247665  209505 out.go:179] * [newest-cni-948249] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1124 14:21:32.250555  209505 out.go:179]   - MINIKUBE_LOCATION=21932
	I1124 14:21:32.250705  209505 notify.go:221] Checking for updates...
	I1124 14:21:32.256421  209505 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 14:21:32.259281  209505 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21932-2805/kubeconfig
	I1124 14:21:32.262191  209505 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21932-2805/.minikube
	I1124 14:21:32.265008  209505 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1124 14:21:32.267799  209505 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 14:21:32.271191  209505 config.go:182] Loaded profile config "newest-cni-948249": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 14:21:32.271931  209505 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 14:21:32.306113  209505 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1124 14:21:32.306216  209505 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 14:21:32.374178  209505 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-24 14:21:32.363824533 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 14:21:32.374286  209505 docker.go:319] overlay module found
	I1124 14:21:32.379470  209505 out.go:179] * Using the docker driver based on existing profile
	I1124 14:21:32.382410  209505 start.go:309] selected driver: docker
	I1124 14:21:32.382432  209505 start.go:927] validating driver "docker" against &{Name:newest-cni-948249 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-948249 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 14:21:32.382557  209505 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 14:21:32.383311  209505 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 14:21:32.457859  209505 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-24 14:21:32.447896223 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 14:21:32.458203  209505 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1124 14:21:32.458238  209505 cni.go:84] Creating CNI manager for ""
	I1124 14:21:32.458301  209505 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 14:21:32.458360  209505 start.go:353] cluster config:
	{Name:newest-cni-948249 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-948249 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 14:21:32.463441  209505 out.go:179] * Starting "newest-cni-948249" primary control-plane node in "newest-cni-948249" cluster
	I1124 14:21:32.466278  209505 cache.go:134] Beginning downloading kic base image for docker with crio
	I1124 14:21:32.469284  209505 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1124 14:21:32.472125  209505 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 14:21:32.472178  209505 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21932-2805/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1124 14:21:32.472193  209505 cache.go:65] Caching tarball of preloaded images
	I1124 14:21:32.472195  209505 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1124 14:21:32.472274  209505 preload.go:238] Found /home/jenkins/minikube-integration/21932-2805/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1124 14:21:32.472284  209505 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1124 14:21:32.472402  209505 profile.go:143] Saving config to /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/newest-cni-948249/config.json ...
	I1124 14:21:32.492284  209505 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1124 14:21:32.492308  209505 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1124 14:21:32.492329  209505 cache.go:240] Successfully downloaded all kic artifacts
	I1124 14:21:32.492361  209505 start.go:360] acquireMachinesLock for newest-cni-948249: {Name:mk494569275f434d30089868c4fe183eb1572641 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 14:21:32.492422  209505 start.go:364] duration metric: took 38.105µs to acquireMachinesLock for "newest-cni-948249"
	I1124 14:21:32.492446  209505 start.go:96] Skipping create...Using existing machine configuration
	I1124 14:21:32.492454  209505 fix.go:54] fixHost starting: 
	I1124 14:21:32.492727  209505 cli_runner.go:164] Run: docker container inspect newest-cni-948249 --format={{.State.Status}}
	I1124 14:21:32.516608  209505 fix.go:112] recreateIfNeeded on newest-cni-948249: state=Stopped err=<nil>
	W1124 14:21:32.516639  209505 fix.go:138] unexpected machine state, will restart: <nil>
	I1124 14:21:32.519714  209505 out.go:252] * Restarting existing docker container for "newest-cni-948249" ...
	I1124 14:21:32.519807  209505 cli_runner.go:164] Run: docker start newest-cni-948249
	I1124 14:21:32.778701  209505 cli_runner.go:164] Run: docker container inspect newest-cni-948249 --format={{.State.Status}}
	I1124 14:21:32.799126  209505 kic.go:430] container "newest-cni-948249" state is running.
	I1124 14:21:32.799541  209505 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-948249
	I1124 14:21:32.825330  209505 profile.go:143] Saving config to /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/newest-cni-948249/config.json ...
	I1124 14:21:32.825574  209505 machine.go:94] provisionDockerMachine start ...
	I1124 14:21:32.825637  209505 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-948249
	I1124 14:21:32.846306  209505 main.go:143] libmachine: Using SSH client type: native
	I1124 14:21:32.846635  209505 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1124 14:21:32.846644  209505 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 14:21:32.847340  209505 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1124 14:21:36.007021  209505 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-948249
	
	I1124 14:21:36.007050  209505 ubuntu.go:182] provisioning hostname "newest-cni-948249"
	I1124 14:21:36.007135  209505 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-948249
	I1124 14:21:36.027540  209505 main.go:143] libmachine: Using SSH client type: native
	I1124 14:21:36.027874  209505 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1124 14:21:36.027886  209505 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-948249 && echo "newest-cni-948249" | sudo tee /etc/hostname
	I1124 14:21:36.188922  209505 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-948249
	
	I1124 14:21:36.189003  209505 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-948249
	I1124 14:21:36.206809  209505 main.go:143] libmachine: Using SSH client type: native
	I1124 14:21:36.207126  209505 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1124 14:21:36.207146  209505 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-948249' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-948249/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-948249' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 14:21:36.364255  209505 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 14:21:36.364280  209505 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21932-2805/.minikube CaCertPath:/home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21932-2805/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21932-2805/.minikube}
	I1124 14:21:36.364318  209505 ubuntu.go:190] setting up certificates
	I1124 14:21:36.364328  209505 provision.go:84] configureAuth start
	I1124 14:21:36.364397  209505 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-948249
	I1124 14:21:36.382830  209505 provision.go:143] copyHostCerts
	I1124 14:21:36.382908  209505 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-2805/.minikube/ca.pem, removing ...
	I1124 14:21:36.382929  209505 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-2805/.minikube/ca.pem
	I1124 14:21:36.383007  209505 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21932-2805/.minikube/ca.pem (1078 bytes)
	I1124 14:21:36.383117  209505 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-2805/.minikube/cert.pem, removing ...
	I1124 14:21:36.383123  209505 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-2805/.minikube/cert.pem
	I1124 14:21:36.383152  209505 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-2805/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21932-2805/.minikube/cert.pem (1123 bytes)
	I1124 14:21:36.383213  209505 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-2805/.minikube/key.pem, removing ...
	I1124 14:21:36.383218  209505 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-2805/.minikube/key.pem
	I1124 14:21:36.383242  209505 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-2805/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21932-2805/.minikube/key.pem (1675 bytes)
	I1124 14:21:36.383297  209505 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21932-2805/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca-key.pem org=jenkins.newest-cni-948249 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-948249]
	I1124 14:21:36.538131  209505 provision.go:177] copyRemoteCerts
	I1124 14:21:36.538200  209505 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 14:21:36.538245  209505 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-948249
	I1124 14:21:36.559067  209505 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/newest-cni-948249/id_rsa Username:docker}
	I1124 14:21:36.663283  209505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1124 14:21:36.683298  209505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1124 14:21:36.702560  209505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1124 14:21:36.722026  209505 provision.go:87] duration metric: took 357.675696ms to configureAuth
	I1124 14:21:36.722056  209505 ubuntu.go:206] setting minikube options for container-runtime
	I1124 14:21:36.722254  209505 config.go:182] Loaded profile config "newest-cni-948249": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 14:21:36.722357  209505 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-948249
	I1124 14:21:36.739185  209505 main.go:143] libmachine: Using SSH client type: native
	I1124 14:21:36.739558  209505 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1124 14:21:36.739579  209505 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1124 14:21:37.081263  209505 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1124 14:21:37.081290  209505 machine.go:97] duration metric: took 4.255705051s to provisionDockerMachine
	I1124 14:21:37.081302  209505 start.go:293] postStartSetup for "newest-cni-948249" (driver="docker")
	I1124 14:21:37.081312  209505 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 14:21:37.081373  209505 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 14:21:37.081419  209505 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-948249
	I1124 14:21:37.099798  209505 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/newest-cni-948249/id_rsa Username:docker}
	I1124 14:21:37.211272  209505 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 14:21:37.215786  209505 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 14:21:37.215827  209505 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 14:21:37.215860  209505 filesync.go:126] Scanning /home/jenkins/minikube-integration/21932-2805/.minikube/addons for local assets ...
	I1124 14:21:37.215959  209505 filesync.go:126] Scanning /home/jenkins/minikube-integration/21932-2805/.minikube/files for local assets ...
	I1124 14:21:37.216111  209505 filesync.go:149] local asset: /home/jenkins/minikube-integration/21932-2805/.minikube/files/etc/ssl/certs/46112.pem -> 46112.pem in /etc/ssl/certs
	I1124 14:21:37.216228  209505 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 14:21:37.225396  209505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/files/etc/ssl/certs/46112.pem --> /etc/ssl/certs/46112.pem (1708 bytes)
	
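One step worth isolating from the start log above is how minikube marks the in-cluster service CIDR as an insecure registry for CRI-O: it writes a sysconfig drop-in over SSH and restarts the runtime. A standalone sketch of that step, with path and contents taken verbatim from the log (run on the node, e.g. via minikube ssh):

	# Allow pulls from the 10.96.0.0/12 service CIDR without TLS, then restart CRI-O.
	sudo mkdir -p /etc/sysconfig
	printf '%s\n' "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '" \
	  | sudo tee /etc/sysconfig/crio.minikube
	sudo systemctl restart crio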
	
	==> CRI-O <==
	Nov 24 14:21:25 default-k8s-diff-port-152851 crio[839]: time="2025-11-24T14:21:25.316683934Z" level=info msg="Created container 900bad27bbd128b37c3af7d8d0a693c0d1ac4fd6ac6ef28f38d08929f756ea57: kube-system/coredns-66bc5c9577-qnfqn/coredns" id=83080b1c-7373-41d3-a1c6-419ae605d261 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 14:21:25 default-k8s-diff-port-152851 crio[839]: time="2025-11-24T14:21:25.318070213Z" level=info msg="Starting container: 900bad27bbd128b37c3af7d8d0a693c0d1ac4fd6ac6ef28f38d08929f756ea57" id=4c840171-01b0-4b80-ada3-e518b4ee5b00 name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 14:21:25 default-k8s-diff-port-152851 crio[839]: time="2025-11-24T14:21:25.322428371Z" level=info msg="Started container" PID=1747 containerID=900bad27bbd128b37c3af7d8d0a693c0d1ac4fd6ac6ef28f38d08929f756ea57 description=kube-system/coredns-66bc5c9577-qnfqn/coredns id=4c840171-01b0-4b80-ada3-e518b4ee5b00 name=/runtime.v1.RuntimeService/StartContainer sandboxID=1c97e081f1b9d7c3bf208e83ff06c59fe4b601e7b3997c2f99f379059ac3e650
	Nov 24 14:21:28 default-k8s-diff-port-152851 crio[839]: time="2025-11-24T14:21:28.98392556Z" level=info msg="Running pod sandbox: default/busybox/POD" id=89a1c4c8-657b-4676-837a-41d8ce5abc12 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 24 14:21:28 default-k8s-diff-port-152851 crio[839]: time="2025-11-24T14:21:28.984015399Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 14:21:28 default-k8s-diff-port-152851 crio[839]: time="2025-11-24T14:21:28.991091411Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:c0d942cadce937c4538b8e362701f0a0a5cc3a712f0c67c5d3e4252477e353e4 UID:cf205d1b-7448-4c54-94b9-88644eb3827e NetNS:/var/run/netns/f18403bb-d7b4-4fad-9ab1-575420e7e6ff Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012b398}] Aliases:map[]}"
	Nov 24 14:21:28 default-k8s-diff-port-152851 crio[839]: time="2025-11-24T14:21:28.991158152Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 24 14:21:29 default-k8s-diff-port-152851 crio[839]: time="2025-11-24T14:21:29.00416714Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:c0d942cadce937c4538b8e362701f0a0a5cc3a712f0c67c5d3e4252477e353e4 UID:cf205d1b-7448-4c54-94b9-88644eb3827e NetNS:/var/run/netns/f18403bb-d7b4-4fad-9ab1-575420e7e6ff Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012b398}] Aliases:map[]}"
	Nov 24 14:21:29 default-k8s-diff-port-152851 crio[839]: time="2025-11-24T14:21:29.004453387Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 24 14:21:29 default-k8s-diff-port-152851 crio[839]: time="2025-11-24T14:21:29.013583082Z" level=info msg="Ran pod sandbox c0d942cadce937c4538b8e362701f0a0a5cc3a712f0c67c5d3e4252477e353e4 with infra container: default/busybox/POD" id=89a1c4c8-657b-4676-837a-41d8ce5abc12 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 24 14:21:29 default-k8s-diff-port-152851 crio[839]: time="2025-11-24T14:21:29.015102023Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=36e611c4-87d3-45ac-9180-3c2b20d79e47 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 14:21:29 default-k8s-diff-port-152851 crio[839]: time="2025-11-24T14:21:29.015595812Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=36e611c4-87d3-45ac-9180-3c2b20d79e47 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 14:21:29 default-k8s-diff-port-152851 crio[839]: time="2025-11-24T14:21:29.015657573Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=36e611c4-87d3-45ac-9180-3c2b20d79e47 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 14:21:29 default-k8s-diff-port-152851 crio[839]: time="2025-11-24T14:21:29.021623116Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=c191a76b-505c-463d-a14f-bb0b95f6e759 name=/runtime.v1.ImageService/PullImage
	Nov 24 14:21:29 default-k8s-diff-port-152851 crio[839]: time="2025-11-24T14:21:29.025284988Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 24 14:21:31 default-k8s-diff-port-152851 crio[839]: time="2025-11-24T14:21:31.174569245Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=c191a76b-505c-463d-a14f-bb0b95f6e759 name=/runtime.v1.ImageService/PullImage
	Nov 24 14:21:31 default-k8s-diff-port-152851 crio[839]: time="2025-11-24T14:21:31.175572545Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=dfe374ec-8457-4ea5-a8ad-3243814c4d5f name=/runtime.v1.ImageService/ImageStatus
	Nov 24 14:21:31 default-k8s-diff-port-152851 crio[839]: time="2025-11-24T14:21:31.17750068Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=c7a68296-ea23-4096-a54d-0a5750bf9b48 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 14:21:31 default-k8s-diff-port-152851 crio[839]: time="2025-11-24T14:21:31.185497768Z" level=info msg="Creating container: default/busybox/busybox" id=464da9a7-d299-463a-be96-049069a4f7c6 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 14:21:31 default-k8s-diff-port-152851 crio[839]: time="2025-11-24T14:21:31.185743818Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 14:21:31 default-k8s-diff-port-152851 crio[839]: time="2025-11-24T14:21:31.190597866Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 14:21:31 default-k8s-diff-port-152851 crio[839]: time="2025-11-24T14:21:31.191098302Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 14:21:31 default-k8s-diff-port-152851 crio[839]: time="2025-11-24T14:21:31.205533939Z" level=info msg="Created container 6a5897540dfdfcbc0e7469b9b38b3f4c63c915c848ab867c89b94ab4f721851d: default/busybox/busybox" id=464da9a7-d299-463a-be96-049069a4f7c6 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 14:21:31 default-k8s-diff-port-152851 crio[839]: time="2025-11-24T14:21:31.208423257Z" level=info msg="Starting container: 6a5897540dfdfcbc0e7469b9b38b3f4c63c915c848ab867c89b94ab4f721851d" id=f5d838a9-115b-4b44-86f4-b1d1c15f1bc5 name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 14:21:31 default-k8s-diff-port-152851 crio[839]: time="2025-11-24T14:21:31.210948682Z" level=info msg="Started container" PID=1808 containerID=6a5897540dfdfcbc0e7469b9b38b3f4c63c915c848ab867c89b94ab4f721851d description=default/busybox/busybox id=f5d838a9-115b-4b44-86f4-b1d1c15f1bc5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c0d942cadce937c4538b8e362701f0a0a5cc3a712f0c67c5d3e4252477e353e4
	
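The CRI-O entries above trace the full CRI flow for the busybox pod: RunPodSandbox, an ImageStatus miss, PullImage resolving the tag to a digest, then CreateContainer and StartContainer. The same state can be re-inspected from the node with crictl; a sketch using standard crictl subcommands (these are not taken from this log, so treat them as an assumption):

	sudo crictl images | grep busybox   # image pulled at 14:21:31
	sudo crictl ps --name busybox       # container 6a5897540dfdf from the status table below
	sudo crictl inspect 6a5897540dfdf   # full runtime config and state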
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	6a5897540dfdf       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   8 seconds ago        Running             busybox                   0                   c0d942cadce93       busybox                                                default
	900bad27bbd12       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      14 seconds ago       Running             coredns                   0                   1c97e081f1b9d       coredns-66bc5c9577-qnfqn                               kube-system
	7f8e2bfde19cf       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      14 seconds ago       Running             storage-provisioner       0                   f1890a095c8f7       storage-provisioner                                    kube-system
	786ab448b3eaa       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      55 seconds ago       Running             kube-proxy                0                   abf6793a68ee7       kube-proxy-m92jb                                       kube-system
	85181b57f2464       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                      55 seconds ago       Running             kindnet-cni               0                   fdc4532e68ad7       kindnet-4j292                                          kube-system
	eb23480a291f3       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      About a minute ago   Running             kube-scheduler            0                   ac29ef06a0790       kube-scheduler-default-k8s-diff-port-152851            kube-system
	5669e74665fd1       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      About a minute ago   Running             kube-apiserver            0                   62132e1d2ccc6       kube-apiserver-default-k8s-diff-port-152851            kube-system
	42262c418cec6       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      About a minute ago   Running             etcd                      0                   96c9e2792a66d       etcd-default-k8s-diff-port-152851                      kube-system
	6038a8ab7b6fb       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      About a minute ago   Running             kube-controller-manager   0                   97353286bc236       kube-controller-manager-default-k8s-diff-port-152851   kube-system
	
	
	==> coredns [900bad27bbd128b37c3af7d8d0a693c0d1ac4fd6ac6ef28f38d08929f756ea57] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:39694 - 35917 "HINFO IN 4147776293000950129.2561380779220216701. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012804211s
	
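The single HINFO query answered NXDOMAIN is the CoreDNS loop plugin's startup self-probe, not an error. With the cluster's busybox pod already running, in-cluster resolution can be spot-checked directly; a sketch, assuming busybox's bundled nslookup applet:

	kubectl exec busybox -- nslookup kubernetes.default.svc.cluster.local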
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-152851
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-152851
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b5d1c9f4e75f4e638a533695fd62619949cefcab
	                    minikube.k8s.io/name=default-k8s-diff-port-152851
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T14_20_38_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 14:20:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-152851
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 14:21:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 14:21:39 +0000   Mon, 24 Nov 2025 14:20:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 14:21:39 +0000   Mon, 24 Nov 2025 14:20:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 14:21:39 +0000   Mon, 24 Nov 2025 14:20:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Nov 2025 14:21:39 +0000   Mon, 24 Nov 2025 14:21:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-diff-port-152851
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 7283ea1857f18f20a875c29069214c9d
	  System UUID:                854b5bec-4224-4750-be80-397681d0c7d0
	  Boot ID:                    1b5f797b-5607-4a65-8de2-379783b7e272
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-66bc5c9577-qnfqn                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     56s
	  kube-system                 etcd-default-k8s-diff-port-152851                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         61s
	  kube-system                 kindnet-4j292                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      56s
	  kube-system                 kube-apiserver-default-k8s-diff-port-152851             250m (12%)    0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-152851    200m (10%)    0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                 kube-proxy-m92jb                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	  kube-system                 kube-scheduler-default-k8s-diff-port-152851             100m (5%)     0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 55s                kube-proxy       
	  Normal   Starting                 69s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 69s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  69s (x8 over 69s)  kubelet          Node default-k8s-diff-port-152851 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    69s (x8 over 69s)  kubelet          Node default-k8s-diff-port-152851 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     69s (x8 over 69s)  kubelet          Node default-k8s-diff-port-152851 status is now: NodeHasSufficientPID
	  Normal   Starting                 62s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 62s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  61s                kubelet          Node default-k8s-diff-port-152851 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    61s                kubelet          Node default-k8s-diff-port-152851 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     61s                kubelet          Node default-k8s-diff-port-152851 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           57s                node-controller  Node default-k8s-diff-port-152851 event: Registered Node default-k8s-diff-port-152851 in Controller
	  Normal   NodeReady                15s                kubelet          Node default-k8s-diff-port-152851 status is now: NodeReady
	
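The node dump above can be regenerated at any time against the live profile; a sketch, assuming minikube's usual convention of naming the kubeconfig context after the profile:

	kubectl --context default-k8s-diff-port-152851 describe node default-k8s-diff-port-152851
	# Sanity check on "Allocated resources": 850m CPU requested / 2000m allocatable = 42.5%, reported as 42%.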
	
	==> dmesg <==
	[Nov24 13:57] overlayfs: idmapped layers are currently not supported
	[Nov24 13:58] overlayfs: idmapped layers are currently not supported
	[  +2.963383] overlayfs: idmapped layers are currently not supported
	[ +47.364934] overlayfs: idmapped layers are currently not supported
	[Nov24 13:59] overlayfs: idmapped layers are currently not supported
	[Nov24 14:00] overlayfs: idmapped layers are currently not supported
	[ +26.972375] overlayfs: idmapped layers are currently not supported
	[Nov24 14:02] overlayfs: idmapped layers are currently not supported
	[Nov24 14:03] overlayfs: idmapped layers are currently not supported
	[Nov24 14:05] overlayfs: idmapped layers are currently not supported
	[Nov24 14:07] overlayfs: idmapped layers are currently not supported
	[ +22.741489] overlayfs: idmapped layers are currently not supported
	[Nov24 14:11] overlayfs: idmapped layers are currently not supported
	[Nov24 14:13] overlayfs: idmapped layers are currently not supported
	[ +29.661409] overlayfs: idmapped layers are currently not supported
	[ +14.398898] overlayfs: idmapped layers are currently not supported
	[Nov24 14:14] overlayfs: idmapped layers are currently not supported
	[ +36.148198] overlayfs: idmapped layers are currently not supported
	[Nov24 14:16] overlayfs: idmapped layers are currently not supported
	[Nov24 14:17] overlayfs: idmapped layers are currently not supported
	[Nov24 14:18] overlayfs: idmapped layers are currently not supported
	[ +49.916713] overlayfs: idmapped layers are currently not supported
	[Nov24 14:19] overlayfs: idmapped layers are currently not supported
	[Nov24 14:20] overlayfs: idmapped layers are currently not supported
	[Nov24 14:21] overlayfs: idmapped layers are currently not supported
	
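The repeated dmesg line is the kernel noting, per overlay mount, that overlayfs on this 5.15 kernel does not support idmapped layers; it is informational and tracks container churn rather than any failure. A quick way to confirm that correlation (a sketch):

	dmesg | grep -c 'idmapped layers are currently not supported'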
	
	==> etcd [42262c418cec631b65189104da338a4a29a15a21931b5c7b247b9c8ca9dc5280] <==
	{"level":"warn","ts":"2025-11-24T14:20:33.673979Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39726","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:20:33.695058Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39752","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:20:33.706516Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39768","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:20:33.744468Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39796","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:20:33.756797Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39818","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:20:33.772668Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39838","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:20:33.792486Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:20:33.803431Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39876","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:20:33.856521Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39910","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:20:33.859386Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39890","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:20:33.867672Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39918","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:20:33.895942Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39936","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:20:33.942479Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39972","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:20:33.954552Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39988","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:20:33.958444Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39950","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:20:33.979287Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40004","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:20:33.994125Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40018","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:20:34.016394Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40044","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:20:34.032515Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40060","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:20:34.052224Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40078","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:20:34.078263Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40092","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:20:34.136412Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40106","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:20:34.137205Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40122","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:20:34.154108Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40142","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:20:34.272879Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40156","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 14:21:40 up  2:04,  0 user,  load average: 2.30, 2.77, 2.50
	Linux default-k8s-diff-port-152851 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [85181b57f2464167e66a36c487b0210324e95f51020995485c045e3d1d215d59] <==
	I1124 14:20:44.325111       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1124 14:20:44.325473       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1124 14:20:44.325732       1 main.go:148] setting mtu 1500 for CNI 
	I1124 14:20:44.325746       1 main.go:178] kindnetd IP family: "ipv4"
	I1124 14:20:44.325769       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-24T14:20:44Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1124 14:20:44.530137       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1124 14:20:44.534952       1 controller.go:381] "Waiting for informer caches to sync"
	I1124 14:20:44.535032       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1124 14:20:44.536325       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1124 14:21:14.531057       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1124 14:21:14.535522       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1124 14:21:14.536718       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1124 14:21:14.536765       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1124 14:21:16.135778       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1124 14:21:16.135816       1 metrics.go:72] Registering metrics
	I1124 14:21:16.135881       1 controller.go:711] "Syncing nftables rules"
	I1124 14:21:24.537172       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1124 14:21:24.537221       1 main.go:301] handling current node
	I1124 14:21:34.531624       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1124 14:21:34.531664       1 main.go:301] handling current node
	
	
	==> kube-apiserver [5669e74665fd1313526813bc8fa3e15bfbb5fea6b4bc118563c2217c0c26d5cf] <==
	E1124 14:20:35.358152       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1124 14:20:35.370037       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 14:20:35.370448       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1124 14:20:35.383870       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1124 14:20:35.389648       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I1124 14:20:35.395002       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1124 14:20:35.570249       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1124 14:20:35.971283       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1124 14:20:35.979518       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1124 14:20:35.979603       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1124 14:20:36.727749       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1124 14:20:36.778664       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1124 14:20:36.865369       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1124 14:20:36.873766       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1124 14:20:36.875582       1 controller.go:667] quota admission added evaluator for: endpoints
	I1124 14:20:36.880693       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1124 14:20:37.351136       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1124 14:20:37.936689       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1124 14:20:37.952971       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1124 14:20:37.974810       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1124 14:20:43.059120       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 14:20:43.066532       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 14:20:43.306333       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1124 14:20:43.643096       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	E1124 14:21:37.848405       1 conn.go:339] Error on socket receive: read tcp 192.168.76.2:8444->192.168.76.1:54416: use of closed network connection
	
	
	==> kube-controller-manager [6038a8ab7b6fb78560d82d3433ce6f4a43bf9ffe5fb223ae70a94321eca614c2] <==
	I1124 14:20:42.403702       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1124 14:20:42.403731       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1124 14:20:42.403758       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1124 14:20:42.404081       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1124 14:20:42.404669       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 14:20:42.404776       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1124 14:20:42.406856       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 14:20:42.418866       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1124 14:20:42.420666       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1124 14:20:42.435661       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1124 14:20:42.445710       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1124 14:20:42.445750       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1124 14:20:42.445771       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1124 14:20:42.445887       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1124 14:20:42.446183       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1124 14:20:42.446249       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1124 14:20:42.450790       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1124 14:20:42.450838       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1124 14:20:42.451145       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1124 14:20:42.452906       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1124 14:20:42.453027       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 14:20:42.453425       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1124 14:20:42.462022       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="default-k8s-diff-port-152851" podCIDRs=["10.244.0.0/24"]
	I1124 14:20:42.463808       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1124 14:21:27.405327       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [786ab448b3eaab97e618d3b82d1b52a9f1ae4af8f04f159a8520294cf5dee995] <==
	I1124 14:20:44.447235       1 server_linux.go:53] "Using iptables proxy"
	I1124 14:20:44.577262       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1124 14:20:44.677793       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1124 14:20:44.677830       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1124 14:20:44.677911       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 14:20:44.762472       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 14:20:44.762599       1 server_linux.go:132] "Using iptables Proxier"
	I1124 14:20:44.777809       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 14:20:44.778197       1 server.go:527] "Version info" version="v1.34.1"
	I1124 14:20:44.778410       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 14:20:44.779889       1 config.go:200] "Starting service config controller"
	I1124 14:20:44.780009       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 14:20:44.780054       1 config.go:106] "Starting endpoint slice config controller"
	I1124 14:20:44.780085       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 14:20:44.780121       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 14:20:44.780147       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1124 14:20:44.780938       1 config.go:309] "Starting node config controller"
	I1124 14:20:44.780999       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 14:20:44.781027       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1124 14:20:44.880671       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1124 14:20:44.880720       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1124 14:20:44.880766       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [eb23480a291f3f810a1e6746a256ddb10e1427531efe641d8e2c706045dd6f38] <==
	I1124 14:20:35.586589       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 14:20:35.586615       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 14:20:35.586635       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1124 14:20:35.596836       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1124 14:20:35.596963       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1124 14:20:35.597172       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1124 14:20:35.597231       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1124 14:20:35.597288       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1124 14:20:35.597340       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1124 14:20:35.597389       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1124 14:20:35.597439       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1124 14:20:35.597490       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1124 14:20:35.597566       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1124 14:20:35.607399       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1124 14:20:35.608830       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1124 14:20:35.612190       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1124 14:20:35.612637       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1124 14:20:35.612742       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1124 14:20:35.613373       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1124 14:20:35.613755       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1124 14:20:35.613827       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1124 14:20:35.613927       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1124 14:20:36.404815       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1124 14:20:36.462296       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	I1124 14:20:36.987178       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 24 14:20:39 default-k8s-diff-port-152851 kubelet[1317]: I1124 14:20:39.149709    1317 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-default-k8s-diff-port-152851" podStartSLOduration=2.149692196 podStartE2EDuration="2.149692196s" podCreationTimestamp="2025-11-24 14:20:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 14:20:39.123773902 +0000 UTC m=+1.333549069" watchObservedRunningTime="2025-11-24 14:20:39.149692196 +0000 UTC m=+1.359467347"
	Nov 24 14:20:39 default-k8s-diff-port-152851 kubelet[1317]: I1124 14:20:39.178483    1317 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-default-k8s-diff-port-152851" podStartSLOduration=1.178462761 podStartE2EDuration="1.178462761s" podCreationTimestamp="2025-11-24 14:20:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 14:20:39.16247468 +0000 UTC m=+1.372249847" watchObservedRunningTime="2025-11-24 14:20:39.178462761 +0000 UTC m=+1.388237920"
	Nov 24 14:20:42 default-k8s-diff-port-152851 kubelet[1317]: I1124 14:20:42.472329    1317 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 24 14:20:42 default-k8s-diff-port-152851 kubelet[1317]: I1124 14:20:42.473068    1317 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 24 14:20:43 default-k8s-diff-port-152851 kubelet[1317]: I1124 14:20:43.502798    1317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/118788fe-af1a-46f0-8ff3-7c4a381d36fd-xtables-lock\") pod \"kube-proxy-m92jb\" (UID: \"118788fe-af1a-46f0-8ff3-7c4a381d36fd\") " pod="kube-system/kube-proxy-m92jb"
	Nov 24 14:20:43 default-k8s-diff-port-152851 kubelet[1317]: I1124 14:20:43.502935    1317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/118788fe-af1a-46f0-8ff3-7c4a381d36fd-kube-proxy\") pod \"kube-proxy-m92jb\" (UID: \"118788fe-af1a-46f0-8ff3-7c4a381d36fd\") " pod="kube-system/kube-proxy-m92jb"
	Nov 24 14:20:43 default-k8s-diff-port-152851 kubelet[1317]: I1124 14:20:43.502957    1317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/118788fe-af1a-46f0-8ff3-7c4a381d36fd-lib-modules\") pod \"kube-proxy-m92jb\" (UID: \"118788fe-af1a-46f0-8ff3-7c4a381d36fd\") " pod="kube-system/kube-proxy-m92jb"
	Nov 24 14:20:43 default-k8s-diff-port-152851 kubelet[1317]: I1124 14:20:43.502976    1317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-klvd7\" (UniqueName: \"kubernetes.io/projected/118788fe-af1a-46f0-8ff3-7c4a381d36fd-kube-api-access-klvd7\") pod \"kube-proxy-m92jb\" (UID: \"118788fe-af1a-46f0-8ff3-7c4a381d36fd\") " pod="kube-system/kube-proxy-m92jb"
	Nov 24 14:20:43 default-k8s-diff-port-152851 kubelet[1317]: I1124 14:20:43.751489    1317 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 24 14:20:43 default-k8s-diff-port-152851 kubelet[1317]: I1124 14:20:43.806258    1317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t75m4\" (UniqueName: \"kubernetes.io/projected/b23f3231-3c24-4e8a-bb05-74e475601643-kube-api-access-t75m4\") pod \"kindnet-4j292\" (UID: \"b23f3231-3c24-4e8a-bb05-74e475601643\") " pod="kube-system/kindnet-4j292"
	Nov 24 14:20:43 default-k8s-diff-port-152851 kubelet[1317]: I1124 14:20:43.806338    1317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/b23f3231-3c24-4e8a-bb05-74e475601643-cni-cfg\") pod \"kindnet-4j292\" (UID: \"b23f3231-3c24-4e8a-bb05-74e475601643\") " pod="kube-system/kindnet-4j292"
	Nov 24 14:20:43 default-k8s-diff-port-152851 kubelet[1317]: I1124 14:20:43.806365    1317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b23f3231-3c24-4e8a-bb05-74e475601643-xtables-lock\") pod \"kindnet-4j292\" (UID: \"b23f3231-3c24-4e8a-bb05-74e475601643\") " pod="kube-system/kindnet-4j292"
	Nov 24 14:20:43 default-k8s-diff-port-152851 kubelet[1317]: I1124 14:20:43.806389    1317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b23f3231-3c24-4e8a-bb05-74e475601643-lib-modules\") pod \"kindnet-4j292\" (UID: \"b23f3231-3c24-4e8a-bb05-74e475601643\") " pod="kube-system/kindnet-4j292"
	Nov 24 14:20:45 default-k8s-diff-port-152851 kubelet[1317]: I1124 14:20:45.230135    1317 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-m92jb" podStartSLOduration=2.230078352 podStartE2EDuration="2.230078352s" podCreationTimestamp="2025-11-24 14:20:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 14:20:45.177268126 +0000 UTC m=+7.387043285" watchObservedRunningTime="2025-11-24 14:20:45.230078352 +0000 UTC m=+7.439853511"
	Nov 24 14:20:47 default-k8s-diff-port-152851 kubelet[1317]: I1124 14:20:47.251224    1317 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-4j292" podStartSLOduration=4.251205135 podStartE2EDuration="4.251205135s" podCreationTimestamp="2025-11-24 14:20:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 14:20:45.244836586 +0000 UTC m=+7.454612344" watchObservedRunningTime="2025-11-24 14:20:47.251205135 +0000 UTC m=+9.460980360"
	Nov 24 14:21:24 default-k8s-diff-port-152851 kubelet[1317]: I1124 14:21:24.843601    1317 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 24 14:21:25 default-k8s-diff-port-152851 kubelet[1317]: I1124 14:21:25.013378    1317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmxgp\" (UniqueName: \"kubernetes.io/projected/21b060b9-5567-4a41-8e79-351855fb6f30-kube-api-access-rmxgp\") pod \"storage-provisioner\" (UID: \"21b060b9-5567-4a41-8e79-351855fb6f30\") " pod="kube-system/storage-provisioner"
	Nov 24 14:21:25 default-k8s-diff-port-152851 kubelet[1317]: I1124 14:21:25.013615    1317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/386494d3-c6d0-46da-898f-5936bcc3bb40-config-volume\") pod \"coredns-66bc5c9577-qnfqn\" (UID: \"386494d3-c6d0-46da-898f-5936bcc3bb40\") " pod="kube-system/coredns-66bc5c9577-qnfqn"
	Nov 24 14:21:25 default-k8s-diff-port-152851 kubelet[1317]: I1124 14:21:25.013772    1317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcqsf\" (UniqueName: \"kubernetes.io/projected/386494d3-c6d0-46da-898f-5936bcc3bb40-kube-api-access-pcqsf\") pod \"coredns-66bc5c9577-qnfqn\" (UID: \"386494d3-c6d0-46da-898f-5936bcc3bb40\") " pod="kube-system/coredns-66bc5c9577-qnfqn"
	Nov 24 14:21:25 default-k8s-diff-port-152851 kubelet[1317]: I1124 14:21:25.013814    1317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/21b060b9-5567-4a41-8e79-351855fb6f30-tmp\") pod \"storage-provisioner\" (UID: \"21b060b9-5567-4a41-8e79-351855fb6f30\") " pod="kube-system/storage-provisioner"
	Nov 24 14:21:25 default-k8s-diff-port-152851 kubelet[1317]: W1124 14:21:25.263334    1317 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/bb380e4fa749c80f5c1b19c95fcad1ed1f835691b8483c3ebe22f8e66973175a/crio-1c97e081f1b9d7c3bf208e83ff06c59fe4b601e7b3997c2f99f379059ac3e650 WatchSource:0}: Error finding container 1c97e081f1b9d7c3bf208e83ff06c59fe4b601e7b3997c2f99f379059ac3e650: Status 404 returned error can't find the container with id 1c97e081f1b9d7c3bf208e83ff06c59fe4b601e7b3997c2f99f379059ac3e650
	Nov 24 14:21:26 default-k8s-diff-port-152851 kubelet[1317]: I1124 14:21:26.317355    1317 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=41.317334316 podStartE2EDuration="41.317334316s" podCreationTimestamp="2025-11-24 14:20:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 14:21:26.280521085 +0000 UTC m=+48.490296244" watchObservedRunningTime="2025-11-24 14:21:26.317334316 +0000 UTC m=+48.527109467"
	Nov 24 14:21:28 default-k8s-diff-port-152851 kubelet[1317]: I1124 14:21:28.674402    1317 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-qnfqn" podStartSLOduration=45.674383757 podStartE2EDuration="45.674383757s" podCreationTimestamp="2025-11-24 14:20:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 14:21:26.319513842 +0000 UTC m=+48.529289026" watchObservedRunningTime="2025-11-24 14:21:28.674383757 +0000 UTC m=+50.884158908"
	Nov 24 14:21:28 default-k8s-diff-port-152851 kubelet[1317]: I1124 14:21:28.742169    1317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9464l\" (UniqueName: \"kubernetes.io/projected/cf205d1b-7448-4c54-94b9-88644eb3827e-kube-api-access-9464l\") pod \"busybox\" (UID: \"cf205d1b-7448-4c54-94b9-88644eb3827e\") " pod="default/busybox"
	Nov 24 14:21:29 default-k8s-diff-port-152851 kubelet[1317]: W1124 14:21:29.011597    1317 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/bb380e4fa749c80f5c1b19c95fcad1ed1f835691b8483c3ebe22f8e66973175a/crio-c0d942cadce937c4538b8e362701f0a0a5cc3a712f0c67c5d3e4252477e353e4 WatchSource:0}: Error finding container c0d942cadce937c4538b8e362701f0a0a5cc3a712f0c67c5d3e4252477e353e4: Status 404 returned error can't find the container with id c0d942cadce937c4538b8e362701f0a0a5cc3a712f0c67c5d3e4252477e353e4
	
	
	==> storage-provisioner [7f8e2bfde19cfeb2c411faf49ba4d89c02a9c97fa1b7bf15a8fd642eeb7e9356] <==
	I1124 14:21:25.341577       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1124 14:21:25.385677       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1124 14:21:25.385816       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1124 14:21:25.387821       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:21:25.396159       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1124 14:21:25.396409       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1124 14:21:25.396628       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-152851_69ad1047-d7a6-4f8e-8140-b4d07ba4c635!
	I1124 14:21:25.400536       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d5980bf8-00ae-4d19-87f0-18805e995386", APIVersion:"v1", ResourceVersion:"462", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-152851_69ad1047-d7a6-4f8e-8140-b4d07ba4c635 became leader
	W1124 14:21:25.400889       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:21:25.409611       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1124 14:21:25.500243       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-152851_69ad1047-d7a6-4f8e-8140-b4d07ba4c635!
	W1124 14:21:27.414296       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:21:27.420135       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:21:29.424060       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:21:29.429872       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:21:31.433922       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:21:31.439446       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:21:33.443290       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:21:33.448218       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:21:35.451570       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:21:35.459029       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:21:37.462059       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:21:37.468827       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:21:39.524970       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:21:39.546560       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-152851 -n default-k8s-diff-port-152851
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-152851 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (3.25s)
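
A side note on the repeated storage-provisioner warnings in the log above: the provisioner still takes its leader-election lock on a core/v1 Endpoints object (kube-system/k8s.io-minikube-hostpath, per the LeaderElection event), and each access triggers the "v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice" warning. That is most likely noise rather than the cause of this EnableAddonWhileActive failure. As a sketch for inspecting both objects (the context name is taken from the commands above; the rest is standard kubectl):

	kubectl --context default-k8s-diff-port-152851 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml
	kubectl --context default-k8s-diff-port-152851 -n kube-system get endpointslices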

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (5.89s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-948249 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p newest-cni-948249 --alsologtostderr -v=1: exit status 80 (1.655227453s)

                                                
                                                
-- stdout --
	* Pausing node newest-cni-948249 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1124 14:21:48.657332  211906 out.go:360] Setting OutFile to fd 1 ...
	I1124 14:21:48.657507  211906 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 14:21:48.657537  211906 out.go:374] Setting ErrFile to fd 2...
	I1124 14:21:48.657581  211906 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 14:21:48.657862  211906 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-2805/.minikube/bin
	I1124 14:21:48.658193  211906 out.go:368] Setting JSON to false
	I1124 14:21:48.658245  211906 mustload.go:66] Loading cluster: newest-cni-948249
	I1124 14:21:48.658669  211906 config.go:182] Loaded profile config "newest-cni-948249": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 14:21:48.659255  211906 cli_runner.go:164] Run: docker container inspect newest-cni-948249 --format={{.State.Status}}
	I1124 14:21:48.679564  211906 host.go:66] Checking if "newest-cni-948249" exists ...
	I1124 14:21:48.679882  211906 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 14:21:48.748958  211906 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:55 OomKillDisable:true NGoroutines:65 SystemTime:2025-11-24 14:21:48.737317833 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 14:21:48.749598  211906 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763503576-21924/minikube-v1.37.0-1763503576-21924-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763503576-21924-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:newest-cni-948249 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1124 14:21:48.753955  211906 out.go:179] * Pausing node newest-cni-948249 ... 
	I1124 14:21:48.756963  211906 host.go:66] Checking if "newest-cni-948249" exists ...
	I1124 14:21:48.757326  211906 ssh_runner.go:195] Run: systemctl --version
	I1124 14:21:48.757374  211906 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-948249
	I1124 14:21:48.776747  211906 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/newest-cni-948249/id_rsa Username:docker}
	I1124 14:21:48.892087  211906 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 14:21:48.908540  211906 pause.go:52] kubelet running: true
	I1124 14:21:48.908625  211906 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1124 14:21:49.194714  211906 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1124 14:21:49.194824  211906 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1124 14:21:49.264661  211906 cri.go:89] found id: "cb6d64e8ea20c49cf95a80ff6125d64c28760dad4ffb41835bacb5da24d441ad"
	I1124 14:21:49.264682  211906 cri.go:89] found id: "da5f54f6db301a368f859207f45ea734aa7736c9ded63a62f5af397e2d1affc4"
	I1124 14:21:49.264688  211906 cri.go:89] found id: "7c2103ac3f27c474a4868f8f4a6aad887ec147e5c1701cdeb90f84a5a85c8b8c"
	I1124 14:21:49.264692  211906 cri.go:89] found id: "a4c6511774af615aa1669300a2c6d04bd89dfe944ed96314161c9a512760f916"
	I1124 14:21:49.264695  211906 cri.go:89] found id: "74e945fd0551cbdb4d26a6a893c168c98947e7a7d5498e5bd8a088068bacefc7"
	I1124 14:21:49.264699  211906 cri.go:89] found id: "ad4b785b35c8646bcdb429734f18a147ca4264fd37bc967141b4cdc9b42a59a0"
	I1124 14:21:49.264702  211906 cri.go:89] found id: ""
	I1124 14:21:49.264753  211906 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 14:21:49.283020  211906 retry.go:31] will retry after 133.554058ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T14:21:49Z" level=error msg="open /run/runc: no such file or directory"
	I1124 14:21:49.417409  211906 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 14:21:49.434765  211906 pause.go:52] kubelet running: false
	I1124 14:21:49.434881  211906 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1124 14:21:49.593278  211906 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1124 14:21:49.593367  211906 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1124 14:21:49.691297  211906 cri.go:89] found id: "cb6d64e8ea20c49cf95a80ff6125d64c28760dad4ffb41835bacb5da24d441ad"
	I1124 14:21:49.691326  211906 cri.go:89] found id: "da5f54f6db301a368f859207f45ea734aa7736c9ded63a62f5af397e2d1affc4"
	I1124 14:21:49.691331  211906 cri.go:89] found id: "7c2103ac3f27c474a4868f8f4a6aad887ec147e5c1701cdeb90f84a5a85c8b8c"
	I1124 14:21:49.691337  211906 cri.go:89] found id: "a4c6511774af615aa1669300a2c6d04bd89dfe944ed96314161c9a512760f916"
	I1124 14:21:49.691348  211906 cri.go:89] found id: "74e945fd0551cbdb4d26a6a893c168c98947e7a7d5498e5bd8a088068bacefc7"
	I1124 14:21:49.691371  211906 cri.go:89] found id: "ad4b785b35c8646bcdb429734f18a147ca4264fd37bc967141b4cdc9b42a59a0"
	I1124 14:21:49.691376  211906 cri.go:89] found id: ""
	I1124 14:21:49.691436  211906 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 14:21:49.706792  211906 retry.go:31] will retry after 262.906415ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T14:21:49Z" level=error msg="open /run/runc: no such file or directory"
	I1124 14:21:49.970355  211906 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 14:21:49.983642  211906 pause.go:52] kubelet running: false
	I1124 14:21:49.983707  211906 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1124 14:21:50.157644  211906 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1124 14:21:50.157786  211906 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1124 14:21:50.231470  211906 cri.go:89] found id: "cb6d64e8ea20c49cf95a80ff6125d64c28760dad4ffb41835bacb5da24d441ad"
	I1124 14:21:50.231538  211906 cri.go:89] found id: "da5f54f6db301a368f859207f45ea734aa7736c9ded63a62f5af397e2d1affc4"
	I1124 14:21:50.231549  211906 cri.go:89] found id: "7c2103ac3f27c474a4868f8f4a6aad887ec147e5c1701cdeb90f84a5a85c8b8c"
	I1124 14:21:50.231553  211906 cri.go:89] found id: "a4c6511774af615aa1669300a2c6d04bd89dfe944ed96314161c9a512760f916"
	I1124 14:21:50.231557  211906 cri.go:89] found id: "74e945fd0551cbdb4d26a6a893c168c98947e7a7d5498e5bd8a088068bacefc7"
	I1124 14:21:50.231561  211906 cri.go:89] found id: "ad4b785b35c8646bcdb429734f18a147ca4264fd37bc967141b4cdc9b42a59a0"
	I1124 14:21:50.231565  211906 cri.go:89] found id: ""
	I1124 14:21:50.231628  211906 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 14:21:50.246295  211906 out.go:203] 
	W1124 14:21:50.249197  211906 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T14:21:50Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T14:21:50Z" level=error msg="open /run/runc: no such file or directory"
	
	W1124 14:21:50.249222  211906 out.go:285] * 
	* 
	W1124 14:21:50.254644  211906 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1124 14:21:50.257623  211906 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p newest-cni-948249 --alsologtostderr -v=1 failed: exit status 80
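The error surfaced by GUEST_PAUSE above is `sudo runc list -f json` failing with "open /run/runc: no such file or directory": the pause path enumerates containers through runc's default state root, yet crictl still saw running containers on every retry, which suggests the CRI-O runtime state lives somewhere other than the root a bare runc invocation reads. A diagnostic sketch for narrowing this down from inside the node (the profile name comes from the test above; `--root` is runc's global state-root flag, and the paths shown are defaults that may differ on this image):

	minikube -p newest-cni-948249 ssh
	sudo crictl ps                     # what CRI-O itself reports as running
	sudo ls /run | grep -i runc        # which runtime state directories actually exist
	sudo runc --root /run/runc list    # the same lookup the pause path performs, default root made explicit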
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-948249
helpers_test.go:243: (dbg) docker inspect newest-cni-948249:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "772438acfd0507cc7ff013b62dafaae325e30233c90e406c54940e5df5577713",
	        "Created": "2025-11-24T14:20:58.322672048Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 209631,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T14:21:32.559039857Z",
	            "FinishedAt": "2025-11-24T14:21:31.501883891Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/772438acfd0507cc7ff013b62dafaae325e30233c90e406c54940e5df5577713/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/772438acfd0507cc7ff013b62dafaae325e30233c90e406c54940e5df5577713/hostname",
	        "HostsPath": "/var/lib/docker/containers/772438acfd0507cc7ff013b62dafaae325e30233c90e406c54940e5df5577713/hosts",
	        "LogPath": "/var/lib/docker/containers/772438acfd0507cc7ff013b62dafaae325e30233c90e406c54940e5df5577713/772438acfd0507cc7ff013b62dafaae325e30233c90e406c54940e5df5577713-json.log",
	        "Name": "/newest-cni-948249",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "newest-cni-948249:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-948249",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "772438acfd0507cc7ff013b62dafaae325e30233c90e406c54940e5df5577713",
	                "LowerDir": "/var/lib/docker/overlay2/9d14b5b721eebc7a335511b1a48a7c6f6dd15a362ce50ae547a78ac50c54fc9f-init/diff:/var/lib/docker/overlay2/13a44a1c9c7389f495d930a01834ff28273a0e5eb2fe3411fc4db3ff0709690d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9d14b5b721eebc7a335511b1a48a7c6f6dd15a362ce50ae547a78ac50c54fc9f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9d14b5b721eebc7a335511b1a48a7c6f6dd15a362ce50ae547a78ac50c54fc9f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9d14b5b721eebc7a335511b1a48a7c6f6dd15a362ce50ae547a78ac50c54fc9f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "newest-cni-948249",
	                "Source": "/var/lib/docker/volumes/newest-cni-948249/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-948249",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-948249",
	                "name.minikube.sigs.k8s.io": "newest-cni-948249",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "130ad8f2339db04c2f90785de43dba4041406fcdc11060484f4678d339eca2ea",
	            "SandboxKey": "/var/run/docker/netns/130ad8f2339d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33088"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33089"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33092"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33090"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33091"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-948249": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "5a:c9:c2:e2:4f:a6",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c3da6258d7ca1e0640d947578734878b1bdb58036b53baabf5783f672d1a649d",
	                    "EndpointID": "0d5a89f2eb646e571cdab4ea300ec0e1b8f7f7069cd9f0bf99c19a650e27ce39",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-948249",
	                        "772438acfd05"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
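The handful of fields the post-mortem keys on can be pulled from the same inspect data with a Go template instead of the full dump (a sketch, using only fields present in the output above):

	docker container inspect newest-cni-948249 --format '{{.State.Status}} paused={{.State.Paused}} pid={{.State.Pid}}'
	# for the state captured above: running paused=false pid=209631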
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-948249 -n newest-cni-948249
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-948249 -n newest-cni-948249: exit status 2 (344.150148ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
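Since --format={{.Host}} only reports the host container, the JSON form is more informative for pinning down which component drives the non-zero exit (a sketch, assuming minikube status's -o json output):

	out/minikube-linux-arm64 status -p newest-cni-948249 -o json
	# expected shape: {"Name":"newest-cni-948249","Host":"Running","Kubelet":"...","APIServer":"...","Kubeconfig":"..."}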
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-948249 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-948249 logs -n 25: (1.044071341s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable dashboard -p no-preload-444317 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-444317            │ jenkins │ v1.37.0 │ 24 Nov 25 14:18 UTC │ 24 Nov 25 14:18 UTC │
	│ start   │ -p no-preload-444317 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-444317            │ jenkins │ v1.37.0 │ 24 Nov 25 14:18 UTC │ 24 Nov 25 14:19 UTC │
	│ addons  │ enable metrics-server -p embed-certs-720293 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-720293           │ jenkins │ v1.37.0 │ 24 Nov 25 14:19 UTC │                     │
	│ stop    │ -p embed-certs-720293 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-720293           │ jenkins │ v1.37.0 │ 24 Nov 25 14:19 UTC │ 24 Nov 25 14:19 UTC │
	│ addons  │ enable dashboard -p embed-certs-720293 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-720293           │ jenkins │ v1.37.0 │ 24 Nov 25 14:19 UTC │ 24 Nov 25 14:19 UTC │
	│ start   │ -p embed-certs-720293 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-720293           │ jenkins │ v1.37.0 │ 24 Nov 25 14:19 UTC │ 24 Nov 25 14:20 UTC │
	│ image   │ no-preload-444317 image list --format=json                                                                                                                                                                                                    │ no-preload-444317            │ jenkins │ v1.37.0 │ 24 Nov 25 14:19 UTC │ 24 Nov 25 14:19 UTC │
	│ pause   │ -p no-preload-444317 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-444317            │ jenkins │ v1.37.0 │ 24 Nov 25 14:19 UTC │                     │
	│ delete  │ -p no-preload-444317                                                                                                                                                                                                                          │ no-preload-444317            │ jenkins │ v1.37.0 │ 24 Nov 25 14:20 UTC │ 24 Nov 25 14:20 UTC │
	│ delete  │ -p no-preload-444317                                                                                                                                                                                                                          │ no-preload-444317            │ jenkins │ v1.37.0 │ 24 Nov 25 14:20 UTC │ 24 Nov 25 14:20 UTC │
	│ delete  │ -p disable-driver-mounts-799392                                                                                                                                                                                                               │ disable-driver-mounts-799392 │ jenkins │ v1.37.0 │ 24 Nov 25 14:20 UTC │ 24 Nov 25 14:20 UTC │
	│ start   │ -p default-k8s-diff-port-152851 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-152851 │ jenkins │ v1.37.0 │ 24 Nov 25 14:20 UTC │ 24 Nov 25 14:21 UTC │
	│ image   │ embed-certs-720293 image list --format=json                                                                                                                                                                                                   │ embed-certs-720293           │ jenkins │ v1.37.0 │ 24 Nov 25 14:20 UTC │ 24 Nov 25 14:20 UTC │
	│ pause   │ -p embed-certs-720293 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-720293           │ jenkins │ v1.37.0 │ 24 Nov 25 14:20 UTC │                     │
	│ delete  │ -p embed-certs-720293                                                                                                                                                                                                                         │ embed-certs-720293           │ jenkins │ v1.37.0 │ 24 Nov 25 14:20 UTC │ 24 Nov 25 14:20 UTC │
	│ delete  │ -p embed-certs-720293                                                                                                                                                                                                                         │ embed-certs-720293           │ jenkins │ v1.37.0 │ 24 Nov 25 14:20 UTC │ 24 Nov 25 14:20 UTC │
	│ start   │ -p newest-cni-948249 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-948249            │ jenkins │ v1.37.0 │ 24 Nov 25 14:20 UTC │ 24 Nov 25 14:21 UTC │
	│ addons  │ enable metrics-server -p newest-cni-948249 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-948249            │ jenkins │ v1.37.0 │ 24 Nov 25 14:21 UTC │                     │
	│ stop    │ -p newest-cni-948249 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-948249            │ jenkins │ v1.37.0 │ 24 Nov 25 14:21 UTC │ 24 Nov 25 14:21 UTC │
	│ addons  │ enable dashboard -p newest-cni-948249 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-948249            │ jenkins │ v1.37.0 │ 24 Nov 25 14:21 UTC │ 24 Nov 25 14:21 UTC │
	│ start   │ -p newest-cni-948249 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-948249            │ jenkins │ v1.37.0 │ 24 Nov 25 14:21 UTC │ 24 Nov 25 14:21 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-152851 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-152851 │ jenkins │ v1.37.0 │ 24 Nov 25 14:21 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-152851 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-152851 │ jenkins │ v1.37.0 │ 24 Nov 25 14:21 UTC │                     │
	│ image   │ newest-cni-948249 image list --format=json                                                                                                                                                                                                    │ newest-cni-948249            │ jenkins │ v1.37.0 │ 24 Nov 25 14:21 UTC │ 24 Nov 25 14:21 UTC │
	│ pause   │ -p newest-cni-948249 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-948249            │ jenkins │ v1.37.0 │ 24 Nov 25 14:21 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
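	# Condensed repro of the newest-cni-948249 rows above (a sketch; flags copied from the Audit table):
	#   out/minikube-linux-arm64 start -p newest-cni-948249 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --container-runtime=crio --kubernetes-version=v1.34.1
	#   out/minikube-linux-arm64 stop -p newest-cni-948249 --alsologtostderr -v=3
	#   out/minikube-linux-arm64 addons enable dashboard -p newest-cni-948249 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
	#   out/minikube-linux-arm64 start -p newest-cni-948249 ...   # same flags as the first start
	#   out/minikube-linux-arm64 pause -p newest-cni-948249 --alsologtostderr -v=1   # exited 80 in this run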
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 14:21:32
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 14:21:32.241151  209505 out.go:360] Setting OutFile to fd 1 ...
	I1124 14:21:32.241282  209505 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 14:21:32.241297  209505 out.go:374] Setting ErrFile to fd 2...
	I1124 14:21:32.241302  209505 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 14:21:32.241541  209505 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-2805/.minikube/bin
	I1124 14:21:32.241904  209505 out.go:368] Setting JSON to false
	I1124 14:21:32.242762  209505 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":7444,"bootTime":1763986649,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1124 14:21:32.242830  209505 start.go:143] virtualization:  
	I1124 14:21:32.247665  209505 out.go:179] * [newest-cni-948249] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1124 14:21:32.250555  209505 out.go:179]   - MINIKUBE_LOCATION=21932
	I1124 14:21:32.250705  209505 notify.go:221] Checking for updates...
	I1124 14:21:32.256421  209505 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 14:21:32.259281  209505 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21932-2805/kubeconfig
	I1124 14:21:32.262191  209505 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21932-2805/.minikube
	I1124 14:21:32.265008  209505 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1124 14:21:32.267799  209505 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 14:21:32.271191  209505 config.go:182] Loaded profile config "newest-cni-948249": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 14:21:32.271931  209505 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 14:21:32.306113  209505 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1124 14:21:32.306216  209505 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 14:21:32.374178  209505 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-24 14:21:32.363824533 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 14:21:32.374286  209505 docker.go:319] overlay module found
	I1124 14:21:32.379470  209505 out.go:179] * Using the docker driver based on existing profile
	I1124 14:21:32.382410  209505 start.go:309] selected driver: docker
	I1124 14:21:32.382432  209505 start.go:927] validating driver "docker" against &{Name:newest-cni-948249 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-948249 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 14:21:32.382557  209505 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 14:21:32.383311  209505 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 14:21:32.457859  209505 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-24 14:21:32.447896223 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 14:21:32.458203  209505 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1124 14:21:32.458238  209505 cni.go:84] Creating CNI manager for ""
	I1124 14:21:32.458301  209505 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 14:21:32.458360  209505 start.go:353] cluster config:
	{Name:newest-cni-948249 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-948249 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 14:21:32.463441  209505 out.go:179] * Starting "newest-cni-948249" primary control-plane node in "newest-cni-948249" cluster
	I1124 14:21:32.466278  209505 cache.go:134] Beginning downloading kic base image for docker with crio
	I1124 14:21:32.469284  209505 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1124 14:21:32.472125  209505 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 14:21:32.472178  209505 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21932-2805/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1124 14:21:32.472193  209505 cache.go:65] Caching tarball of preloaded images
	I1124 14:21:32.472195  209505 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1124 14:21:32.472274  209505 preload.go:238] Found /home/jenkins/minikube-integration/21932-2805/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1124 14:21:32.472284  209505 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1124 14:21:32.472402  209505 profile.go:143] Saving config to /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/newest-cni-948249/config.json ...
	I1124 14:21:32.492284  209505 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1124 14:21:32.492308  209505 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1124 14:21:32.492329  209505 cache.go:240] Successfully downloaded all kic artifacts
	I1124 14:21:32.492361  209505 start.go:360] acquireMachinesLock for newest-cni-948249: {Name:mk494569275f434d30089868c4fe183eb1572641 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 14:21:32.492422  209505 start.go:364] duration metric: took 38.105µs to acquireMachinesLock for "newest-cni-948249"
	I1124 14:21:32.492446  209505 start.go:96] Skipping create...Using existing machine configuration
	I1124 14:21:32.492454  209505 fix.go:54] fixHost starting: 
	I1124 14:21:32.492727  209505 cli_runner.go:164] Run: docker container inspect newest-cni-948249 --format={{.State.Status}}
	I1124 14:21:32.516608  209505 fix.go:112] recreateIfNeeded on newest-cni-948249: state=Stopped err=<nil>
	W1124 14:21:32.516639  209505 fix.go:138] unexpected machine state, will restart: <nil>
	I1124 14:21:32.519714  209505 out.go:252] * Restarting existing docker container for "newest-cni-948249" ...
	I1124 14:21:32.519807  209505 cli_runner.go:164] Run: docker start newest-cni-948249
	I1124 14:21:32.778701  209505 cli_runner.go:164] Run: docker container inspect newest-cni-948249 --format={{.State.Status}}
	I1124 14:21:32.799126  209505 kic.go:430] container "newest-cni-948249" state is running.
	I1124 14:21:32.799541  209505 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-948249
	I1124 14:21:32.825330  209505 profile.go:143] Saving config to /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/newest-cni-948249/config.json ...
	I1124 14:21:32.825574  209505 machine.go:94] provisionDockerMachine start ...
	I1124 14:21:32.825637  209505 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-948249
	I1124 14:21:32.846306  209505 main.go:143] libmachine: Using SSH client type: native
	I1124 14:21:32.846635  209505 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1124 14:21:32.846644  209505 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 14:21:32.847340  209505 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1124 14:21:36.007021  209505 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-948249
	
	I1124 14:21:36.007050  209505 ubuntu.go:182] provisioning hostname "newest-cni-948249"
	I1124 14:21:36.007135  209505 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-948249
	I1124 14:21:36.027540  209505 main.go:143] libmachine: Using SSH client type: native
	I1124 14:21:36.027874  209505 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1124 14:21:36.027886  209505 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-948249 && echo "newest-cni-948249" | sudo tee /etc/hostname
	I1124 14:21:36.188922  209505 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-948249
	
	I1124 14:21:36.189003  209505 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-948249
	I1124 14:21:36.206809  209505 main.go:143] libmachine: Using SSH client type: native
	I1124 14:21:36.207126  209505 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1124 14:21:36.207146  209505 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-948249' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-948249/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-948249' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 14:21:36.364255  209505 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 14:21:36.364280  209505 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21932-2805/.minikube CaCertPath:/home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21932-2805/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21932-2805/.minikube}
	I1124 14:21:36.364318  209505 ubuntu.go:190] setting up certificates
	I1124 14:21:36.364328  209505 provision.go:84] configureAuth start
	I1124 14:21:36.364397  209505 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-948249
	I1124 14:21:36.382830  209505 provision.go:143] copyHostCerts
	I1124 14:21:36.382908  209505 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-2805/.minikube/ca.pem, removing ...
	I1124 14:21:36.382929  209505 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-2805/.minikube/ca.pem
	I1124 14:21:36.383007  209505 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21932-2805/.minikube/ca.pem (1078 bytes)
	I1124 14:21:36.383117  209505 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-2805/.minikube/cert.pem, removing ...
	I1124 14:21:36.383123  209505 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-2805/.minikube/cert.pem
	I1124 14:21:36.383152  209505 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-2805/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21932-2805/.minikube/cert.pem (1123 bytes)
	I1124 14:21:36.383213  209505 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-2805/.minikube/key.pem, removing ...
	I1124 14:21:36.383218  209505 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-2805/.minikube/key.pem
	I1124 14:21:36.383242  209505 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-2805/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21932-2805/.minikube/key.pem (1675 bytes)
	I1124 14:21:36.383297  209505 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21932-2805/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca-key.pem org=jenkins.newest-cni-948249 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-948249]
	I1124 14:21:36.538131  209505 provision.go:177] copyRemoteCerts
	I1124 14:21:36.538200  209505 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 14:21:36.538245  209505 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-948249
	I1124 14:21:36.559067  209505 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/newest-cni-948249/id_rsa Username:docker}
	I1124 14:21:36.663283  209505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1124 14:21:36.683298  209505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1124 14:21:36.702560  209505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1124 14:21:36.722026  209505 provision.go:87] duration metric: took 357.675696ms to configureAuth
	I1124 14:21:36.722056  209505 ubuntu.go:206] setting minikube options for container-runtime
	I1124 14:21:36.722254  209505 config.go:182] Loaded profile config "newest-cni-948249": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 14:21:36.722357  209505 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-948249
	I1124 14:21:36.739185  209505 main.go:143] libmachine: Using SSH client type: native
	I1124 14:21:36.739558  209505 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1124 14:21:36.739579  209505 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1124 14:21:37.081263  209505 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1124 14:21:37.081290  209505 machine.go:97] duration metric: took 4.255705051s to provisionDockerMachine
	I1124 14:21:37.081302  209505 start.go:293] postStartSetup for "newest-cni-948249" (driver="docker")
	I1124 14:21:37.081312  209505 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 14:21:37.081373  209505 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 14:21:37.081419  209505 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-948249
	I1124 14:21:37.099798  209505 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/newest-cni-948249/id_rsa Username:docker}
	I1124 14:21:37.211272  209505 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 14:21:37.215786  209505 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 14:21:37.215827  209505 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 14:21:37.215860  209505 filesync.go:126] Scanning /home/jenkins/minikube-integration/21932-2805/.minikube/addons for local assets ...
	I1124 14:21:37.215959  209505 filesync.go:126] Scanning /home/jenkins/minikube-integration/21932-2805/.minikube/files for local assets ...
	I1124 14:21:37.216111  209505 filesync.go:149] local asset: /home/jenkins/minikube-integration/21932-2805/.minikube/files/etc/ssl/certs/46112.pem -> 46112.pem in /etc/ssl/certs
	I1124 14:21:37.216228  209505 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 14:21:37.225396  209505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/files/etc/ssl/certs/46112.pem --> /etc/ssl/certs/46112.pem (1708 bytes)
	I1124 14:21:37.245136  209505 start.go:296] duration metric: took 163.818459ms for postStartSetup
	I1124 14:21:37.245298  209505 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 14:21:37.245373  209505 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-948249
	I1124 14:21:37.264116  209505 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/newest-cni-948249/id_rsa Username:docker}
	I1124 14:21:37.368517  209505 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 14:21:37.373685  209505 fix.go:56] duration metric: took 4.881213093s for fixHost
	I1124 14:21:37.373711  209505 start.go:83] releasing machines lock for "newest-cni-948249", held for 4.881275937s
	I1124 14:21:37.373788  209505 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-948249
	I1124 14:21:37.390866  209505 ssh_runner.go:195] Run: cat /version.json
	I1124 14:21:37.390922  209505 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-948249
	I1124 14:21:37.391010  209505 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 14:21:37.391073  209505 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-948249
	I1124 14:21:37.412684  209505 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/newest-cni-948249/id_rsa Username:docker}
	I1124 14:21:37.423441  209505 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/newest-cni-948249/id_rsa Username:docker}
	I1124 14:21:37.617432  209505 ssh_runner.go:195] Run: systemctl --version
	I1124 14:21:37.625309  209505 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1124 14:21:37.664634  209505 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 14:21:37.669012  209505 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 14:21:37.669111  209505 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 14:21:37.678625  209505 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1124 14:21:37.678647  209505 start.go:496] detecting cgroup driver to use...
	I1124 14:21:37.678678  209505 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1124 14:21:37.678725  209505 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1124 14:21:37.696878  209505 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1124 14:21:37.713347  209505 docker.go:218] disabling cri-docker service (if available) ...
	I1124 14:21:37.713409  209505 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 14:21:37.733057  209505 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 14:21:37.748307  209505 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 14:21:37.926927  209505 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 14:21:38.120375  209505 docker.go:234] disabling docker service ...
	I1124 14:21:38.120457  209505 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 14:21:38.139289  209505 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 14:21:38.158564  209505 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 14:21:38.326353  209505 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 14:21:38.489413  209505 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 14:21:38.515809  209505 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 14:21:38.534483  209505 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1124 14:21:38.534554  209505 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 14:21:38.544781  209505 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1124 14:21:38.544885  209505 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 14:21:38.557097  209505 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 14:21:38.568681  209505 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 14:21:38.578487  209505 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 14:21:38.587788  209505 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 14:21:38.597490  209505 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 14:21:38.608505  209505 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 14:21:38.618618  209505 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 14:21:38.628805  209505 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 14:21:38.637807  209505 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 14:21:38.804972  209505 ssh_runner.go:195] Run: sudo systemctl restart crio
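	# Net effect of the sed edits above on /etc/crio/crio.conf.d/02-crio.conf (a sketch showing
	# only the keys those commands touch):
	#   pause_image = "registry.k8s.io/pause:3.10.1"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"
	#   default_sysctls = [
	#     "net.ipv4.ip_unprivileged_port_start=0",
	#   ]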
	I1124 14:21:39.001266  209505 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1124 14:21:39.001448  209505 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1124 14:21:39.010908  209505 start.go:564] Will wait 60s for crictl version
	I1124 14:21:39.010984  209505 ssh_runner.go:195] Run: which crictl
	I1124 14:21:39.016839  209505 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 14:21:39.061938  209505 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1124 14:21:39.062030  209505 ssh_runner.go:195] Run: crio --version
	I1124 14:21:39.101645  209505 ssh_runner.go:195] Run: crio --version
	I1124 14:21:39.133905  209505 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1124 14:21:39.136696  209505 cli_runner.go:164] Run: docker network inspect newest-cni-948249 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 14:21:39.159728  209505 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1124 14:21:39.164698  209505 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 14:21:39.177696  209505 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1124 14:21:39.180464  209505 kubeadm.go:884] updating cluster {Name:newest-cni-948249 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-948249 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 14:21:39.180610  209505 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 14:21:39.180677  209505 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 14:21:39.224164  209505 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 14:21:39.224188  209505 crio.go:433] Images already preloaded, skipping extraction
	I1124 14:21:39.224232  209505 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 14:21:39.257798  209505 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 14:21:39.257873  209505 cache_images.go:86] Images are preloaded, skipping loading
	I1124 14:21:39.257904  209505 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1124 14:21:39.258031  209505 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-948249 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-948249 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
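Note: the empty ExecStart= in the drop-in above is deliberate systemd idiom. For a non-oneshot service, systemd rejects a second ExecStart= line, so the drop-in first clears the command inherited from the base kubelet.service and then restates the full kubelet invocation with the node-specific flags. A sketch of rendering such a drop-in with text/template (field names are illustrative; minikube's real template differs):

package main

import (
	"os"
	"text/template"
)

// dropIn mirrors the shape of the 10-kubeadm.conf shown above: clear the
// inherited ExecStart, then restate it with per-node flags.
const dropIn = `[Unit]
Wants={{.Runtime}}.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --hostname-override={{.Node}} --node-ip={{.IP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(dropIn))
	t.Execute(os.Stdout, map[string]string{
		"Runtime": "crio",
		"Version": "v1.34.1",
		"Node":    "newest-cni-948249",
		"IP":      "192.168.85.2",
	})
}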
	I1124 14:21:39.258147  209505 ssh_runner.go:195] Run: crio config
	I1124 14:21:39.369285  209505 cni.go:84] Creating CNI manager for ""
	I1124 14:21:39.369311  209505 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 14:21:39.369342  209505 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1124 14:21:39.369379  209505 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-948249 NodeName:newest-cni-948249 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 14:21:39.369571  209505 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-948249"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
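
Note: the generated kubeadm config above is a single four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that gets scp'd to /var/tmp/minikube/kubeadm.yaml.new below. A quick, hypothetical sanity check that all four documents parse, using the multi-document decoder from gopkg.in/yaml.v3:

package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new") // path from the log below
	if err != nil {
		panic(err)
	}
	defer f.Close()

	// Decode is called once per "---"-separated document until io.EOF.
	dec := yaml.NewDecoder(f)
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			panic(err)
		}
		fmt.Println(doc.APIVersion, doc.Kind)
	}
}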
	
	I1124 14:21:39.369665  209505 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1124 14:21:39.396594  209505 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 14:21:39.396688  209505 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 14:21:39.405490  209505 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1124 14:21:39.429179  209505 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 14:21:39.444978  209505 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
	I1124 14:21:39.459281  209505 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1124 14:21:39.463953  209505 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 14:21:39.473533  209505 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 14:21:39.631003  209505 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 14:21:39.656011  209505 certs.go:69] Setting up /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/newest-cni-948249 for IP: 192.168.85.2
	I1124 14:21:39.656073  209505 certs.go:195] generating shared ca certs ...
	I1124 14:21:39.656092  209505 certs.go:227] acquiring lock for ca certs: {Name:mk5b88bcf3bee8e73291a2c9c79f99bafa2afa7b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:21:39.656285  209505 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21932-2805/.minikube/ca.key
	I1124 14:21:39.656356  209505 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21932-2805/.minikube/proxy-client-ca.key
	I1124 14:21:39.656370  209505 certs.go:257] generating profile certs ...
	I1124 14:21:39.656473  209505 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/newest-cni-948249/client.key
	I1124 14:21:39.656630  209505 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/newest-cni-948249/apiserver.key.dccfb6e0
	I1124 14:21:39.656705  209505 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/newest-cni-948249/proxy-client.key
	I1124 14:21:39.656854  209505 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2805/.minikube/certs/4611.pem (1338 bytes)
	W1124 14:21:39.656910  209505 certs.go:480] ignoring /home/jenkins/minikube-integration/21932-2805/.minikube/certs/4611_empty.pem, impossibly tiny 0 bytes
	I1124 14:21:39.656928  209505 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca-key.pem (1679 bytes)
	I1124 14:21:39.656971  209505 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca.pem (1078 bytes)
	I1124 14:21:39.657023  209505 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2805/.minikube/certs/cert.pem (1123 bytes)
	I1124 14:21:39.657066  209505 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2805/.minikube/certs/key.pem (1675 bytes)
	I1124 14:21:39.657133  209505 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2805/.minikube/files/etc/ssl/certs/46112.pem (1708 bytes)
	I1124 14:21:39.657778  209505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 14:21:39.704888  209505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1124 14:21:39.756846  209505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 14:21:39.839832  209505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1124 14:21:39.872678  209505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/newest-cni-948249/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1124 14:21:39.920016  209505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/newest-cni-948249/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1124 14:21:39.947119  209505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/newest-cni-948249/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 14:21:39.971193  209505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/newest-cni-948249/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1124 14:21:39.998549  209505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/files/etc/ssl/certs/46112.pem --> /usr/share/ca-certificates/46112.pem (1708 bytes)
	I1124 14:21:40.033092  209505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 14:21:40.058167  209505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/certs/4611.pem --> /usr/share/ca-certificates/4611.pem (1338 bytes)
	I1124 14:21:40.081660  209505 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 14:21:40.101101  209505 ssh_runner.go:195] Run: openssl version
	I1124 14:21:40.110919  209505 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/46112.pem && ln -fs /usr/share/ca-certificates/46112.pem /etc/ssl/certs/46112.pem"
	I1124 14:21:40.122397  209505 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/46112.pem
	I1124 14:21:40.127888  209505 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 13:21 /usr/share/ca-certificates/46112.pem
	I1124 14:21:40.127959  209505 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/46112.pem
	I1124 14:21:40.178378  209505 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/46112.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 14:21:40.188458  209505 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 14:21:40.199252  209505 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 14:21:40.204270  209505 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 13:14 /usr/share/ca-certificates/minikubeCA.pem
	I1124 14:21:40.204349  209505 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 14:21:40.250608  209505 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 14:21:40.259919  209505 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4611.pem && ln -fs /usr/share/ca-certificates/4611.pem /etc/ssl/certs/4611.pem"
	I1124 14:21:40.270264  209505 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4611.pem
	I1124 14:21:40.275606  209505 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 13:21 /usr/share/ca-certificates/4611.pem
	I1124 14:21:40.275685  209505 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4611.pem
	I1124 14:21:40.317953  209505 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4611.pem /etc/ssl/certs/51391683.0"
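Note: the ls / openssl x509 -hash / ln sequence repeated above for each PEM follows OpenSSL's trust-store layout: libraries locate CAs in /etc/ssl/certs via a <subject-hash>.0 symlink (here b5213941.0 for minikubeCA.pem), so copying a PEM in is not enough until the hash link exists. A sketch that derives the link name the same way the log does, by shelling out to openssl:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// subjectHash returns the subject-name hash openssl uses to look up a CA
// certificate, i.e. the basename of the <hash>.0 symlink in /etc/ssl/certs.
func subjectHash(pemPath string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	h, err := subjectHash("/usr/share/ca-certificates/minikubeCA.pem")
	if err != nil {
		panic(err)
	}
	fmt.Printf("sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/%s.0\n", h)
}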
	I1124 14:21:40.337546  209505 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 14:21:40.343196  209505 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1124 14:21:40.483727  209505 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1124 14:21:40.626585  209505 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1124 14:21:40.776251  209505 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1124 14:21:40.910171  209505 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1124 14:21:41.033463  209505 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
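Note: each of the six openssl runs above passes -checkend 86400, which exits non-zero if the certificate will expire within the next 86400 seconds (24 hours); a failing check is what would trigger regenerating control-plane certs instead of reusing them. The same test in Go's crypto/x509, as a self-contained sketch:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within
// duration d, mirroring `openssl x509 -checkend` for a single cert.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/front-proxy-client.crt", 24*time.Hour)
	if err != nil {
		panic(err)
	}
	fmt.Println("expires within 24h:", soon)
}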
	I1124 14:21:41.178601  209505 kubeadm.go:401] StartCluster: {Name:newest-cni-948249 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-948249 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 14:21:41.178697  209505 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 14:21:41.178768  209505 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 14:21:41.277532  209505 cri.go:89] found id: "7c2103ac3f27c474a4868f8f4a6aad887ec147e5c1701cdeb90f84a5a85c8b8c"
	I1124 14:21:41.277551  209505 cri.go:89] found id: "a4c6511774af615aa1669300a2c6d04bd89dfe944ed96314161c9a512760f916"
	I1124 14:21:41.277556  209505 cri.go:89] found id: "74e945fd0551cbdb4d26a6a893c168c98947e7a7d5498e5bd8a088068bacefc7"
	I1124 14:21:41.277559  209505 cri.go:89] found id: "ad4b785b35c8646bcdb429734f18a147ca4264fd37bc967141b4cdc9b42a59a0"
	I1124 14:21:41.277562  209505 cri.go:89] found id: ""
	I1124 14:21:41.277609  209505 ssh_runner.go:195] Run: sudo runc list -f json
	W1124 14:21:41.315942  209505 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T14:21:41Z" level=error msg="open /run/runc: no such file or directory"
	I1124 14:21:41.316039  209505 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 14:21:41.336871  209505 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1124 14:21:41.336905  209505 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1124 14:21:41.336962  209505 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1124 14:21:41.350977  209505 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1124 14:21:41.351556  209505 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-948249" does not appear in /home/jenkins/minikube-integration/21932-2805/kubeconfig
	I1124 14:21:41.351973  209505 kubeconfig.go:62] /home/jenkins/minikube-integration/21932-2805/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-948249" cluster setting kubeconfig missing "newest-cni-948249" context setting]
	I1124 14:21:41.352514  209505 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2805/kubeconfig: {Name:mk95d10d27091d631e85a5a3c35d5e4e38630871 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:21:41.354205  209505 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1124 14:21:41.367733  209505 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1124 14:21:41.367772  209505 kubeadm.go:602] duration metric: took 30.860152ms to restartPrimaryControlPlane
	I1124 14:21:41.367783  209505 kubeadm.go:403] duration metric: took 189.193178ms to StartCluster
	I1124 14:21:41.367799  209505 settings.go:142] acquiring lock: {Name:mk89c1ba43c874315f683e1eb3a8f5ff3817a931 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:21:41.367866  209505 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21932-2805/kubeconfig
	I1124 14:21:41.368777  209505 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2805/kubeconfig: {Name:mk95d10d27091d631e85a5a3c35d5e4e38630871 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:21:41.369021  209505 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 14:21:41.369402  209505 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 14:21:41.369489  209505 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-948249"
	I1124 14:21:41.369508  209505 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-948249"
	W1124 14:21:41.369514  209505 addons.go:248] addon storage-provisioner should already be in state true
	I1124 14:21:41.369539  209505 host.go:66] Checking if "newest-cni-948249" exists ...
	I1124 14:21:41.370046  209505 cli_runner.go:164] Run: docker container inspect newest-cni-948249 --format={{.State.Status}}
	I1124 14:21:41.370430  209505 config.go:182] Loaded profile config "newest-cni-948249": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 14:21:41.370510  209505 addons.go:70] Setting dashboard=true in profile "newest-cni-948249"
	I1124 14:21:41.370531  209505 addons.go:239] Setting addon dashboard=true in "newest-cni-948249"
	W1124 14:21:41.370539  209505 addons.go:248] addon dashboard should already be in state true
	I1124 14:21:41.370586  209505 host.go:66] Checking if "newest-cni-948249" exists ...
	I1124 14:21:41.371049  209505 cli_runner.go:164] Run: docker container inspect newest-cni-948249 --format={{.State.Status}}
	I1124 14:21:41.371404  209505 addons.go:70] Setting default-storageclass=true in profile "newest-cni-948249"
	I1124 14:21:41.371426  209505 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-948249"
	I1124 14:21:41.371704  209505 cli_runner.go:164] Run: docker container inspect newest-cni-948249 --format={{.State.Status}}
	I1124 14:21:41.377420  209505 out.go:179] * Verifying Kubernetes components...
	I1124 14:21:41.380746  209505 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 14:21:41.412294  209505 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 14:21:41.417172  209505 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 14:21:41.417198  209505 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 14:21:41.417261  209505 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-948249
	I1124 14:21:41.435586  209505 addons.go:239] Setting addon default-storageclass=true in "newest-cni-948249"
	W1124 14:21:41.435608  209505 addons.go:248] addon default-storageclass should already be in state true
	I1124 14:21:41.435633  209505 host.go:66] Checking if "newest-cni-948249" exists ...
	I1124 14:21:41.436063  209505 cli_runner.go:164] Run: docker container inspect newest-cni-948249 --format={{.State.Status}}
	I1124 14:21:41.450368  209505 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1124 14:21:41.453511  209505 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1124 14:21:41.456621  209505 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1124 14:21:41.456648  209505 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1124 14:21:41.456717  209505 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-948249
	I1124 14:21:41.472927  209505 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/newest-cni-948249/id_rsa Username:docker}
	I1124 14:21:41.487545  209505 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 14:21:41.487568  209505 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 14:21:41.487630  209505 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-948249
	I1124 14:21:41.510669  209505 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/newest-cni-948249/id_rsa Username:docker}
	I1124 14:21:41.525666  209505 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/newest-cni-948249/id_rsa Username:docker}
	I1124 14:21:41.852813  209505 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1124 14:21:41.852841  209505 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1124 14:21:41.868900  209505 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 14:21:41.953898  209505 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 14:21:41.962185  209505 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1124 14:21:41.962211  209505 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1124 14:21:41.977337  209505 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 14:21:42.017315  209505 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1124 14:21:42.017356  209505 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1124 14:21:42.090945  209505 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1124 14:21:42.090985  209505 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1124 14:21:42.157786  209505 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1124 14:21:42.157831  209505 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1124 14:21:42.218529  209505 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1124 14:21:42.218554  209505 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1124 14:21:42.256038  209505 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1124 14:21:42.256100  209505 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1124 14:21:42.285651  209505 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1124 14:21:42.285680  209505 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1124 14:21:42.306699  209505 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1124 14:21:42.306743  209505 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1124 14:21:42.336643  209505 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1124 14:21:47.296008  209505 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.427069521s)
	I1124 14:21:47.296072  209505 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (5.342149619s)
	I1124 14:21:47.296102  209505 api_server.go:52] waiting for apiserver process to appear ...
	I1124 14:21:47.296164  209505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 14:21:47.296228  209505 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.318868221s)
	I1124 14:21:47.365590  209505 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (5.028897171s)
	I1124 14:21:47.365816  209505 api_server.go:72] duration metric: took 5.996762488s to wait for apiserver process to appear ...
	I1124 14:21:47.365834  209505 api_server.go:88] waiting for apiserver healthz status ...
	I1124 14:21:47.365880  209505 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1124 14:21:47.368799  209505 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-948249 addons enable metrics-server
	
	I1124 14:21:47.371588  209505 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1124 14:21:47.374436  209505 addons.go:530] duration metric: took 6.005027001s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1124 14:21:47.377218  209505 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1124 14:21:47.377284  209505 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
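Note: the 500 above is expected mid-restart: /healthz aggregates the per-component checks listed, and the only failing one is [-]poststarthook/rbac/bootstrap-roles, which clears once the RBAC bootstrap roles are reconciled; the next probe (below, roughly half a second later) returns 200. A minimal sketch of such a poll-until-healthy loop (TLS verification is skipped only to keep the sketch short; a real client would trust the cluster CA):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for i := 0; i < 60; i++ { // ~30s budget at 500ms per retry
		resp, err := client.Get("https://192.168.85.2:8443/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy")
				return
			}
			fmt.Println("healthz returned", resp.StatusCode, "- retrying")
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("gave up waiting for /healthz")
}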
	I1124 14:21:47.866966  209505 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1124 14:21:47.883151  209505 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1124 14:21:47.884788  209505 api_server.go:141] control plane version: v1.34.1
	I1124 14:21:47.884849  209505 api_server.go:131] duration metric: took 519.006899ms to wait for apiserver health ...
	I1124 14:21:47.884875  209505 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 14:21:47.889526  209505 system_pods.go:59] 8 kube-system pods found
	I1124 14:21:47.889567  209505 system_pods.go:61] "coredns-66bc5c9577-6rv2z" [f569a6bf-bdcc-4176-8cb8-3bb68921e2da] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1124 14:21:47.889577  209505 system_pods.go:61] "etcd-newest-cni-948249" [963d5d58-180c-49d7-81e1-23b0a458bf9b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 14:21:47.889628  209505 system_pods.go:61] "kindnet-gtj2g" [e153411a-2f4b-4151-b83b-19611f170cfb] Running
	I1124 14:21:47.889638  209505 system_pods.go:61] "kube-apiserver-newest-cni-948249" [a30eecaf-1cbf-4072-a6aa-0c069801cc74] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1124 14:21:47.889649  209505 system_pods.go:61] "kube-controller-manager-newest-cni-948249" [edd42557-6e50-47eb-90fb-5d1bc56a8943] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 14:21:47.889655  209505 system_pods.go:61] "kube-proxy-tsnk9" [2cd4d95f-1e99-425c-948b-1ee004fea3ac] Running
	I1124 14:21:47.889661  209505 system_pods.go:61] "kube-scheduler-newest-cni-948249" [139bbe9e-626b-4937-a3a7-1929a3c43762] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 14:21:47.889696  209505 system_pods.go:61] "storage-provisioner" [c81e4590-cbb7-4278-bd3f-74f5be196395] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1124 14:21:47.889709  209505 system_pods.go:74] duration metric: took 4.81568ms to wait for pod list to return data ...
	I1124 14:21:47.889719  209505 default_sa.go:34] waiting for default service account to be created ...
	I1124 14:21:47.891967  209505 default_sa.go:45] found service account: "default"
	I1124 14:21:47.891992  209505 default_sa.go:55] duration metric: took 2.259109ms for default service account to be created ...
	I1124 14:21:47.892005  209505 kubeadm.go:587] duration metric: took 6.522951895s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1124 14:21:47.892025  209505 node_conditions.go:102] verifying NodePressure condition ...
	I1124 14:21:47.894085  209505 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1124 14:21:47.894156  209505 node_conditions.go:123] node cpu capacity is 2
	I1124 14:21:47.894182  209505 node_conditions.go:105] duration metric: took 2.147682ms to run NodePressure ...
	I1124 14:21:47.894203  209505 start.go:242] waiting for startup goroutines ...
	I1124 14:21:47.894224  209505 start.go:247] waiting for cluster config update ...
	I1124 14:21:47.894246  209505 start.go:256] writing updated cluster config ...
	I1124 14:21:47.894551  209505 ssh_runner.go:195] Run: rm -f paused
	I1124 14:21:47.959763  209505 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1124 14:21:47.963433  209505 out.go:179] * Done! kubectl is now configured to use "newest-cni-948249" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 24 14:21:46 newest-cni-948249 crio[615]: time="2025-11-24T14:21:46.124607402Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 14:21:46 newest-cni-948249 crio[615]: time="2025-11-24T14:21:46.131209496Z" level=info msg="Running pod sandbox: kube-system/kindnet-gtj2g/POD" id=4870c43a-6dbd-46a1-bd78-fecf2b08f1bb name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 24 14:21:46 newest-cni-948249 crio[615]: time="2025-11-24T14:21:46.131300648Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 14:21:46 newest-cni-948249 crio[615]: time="2025-11-24T14:21:46.141547212Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=cefa8f05-f10d-4805-8982-15faf44363ae name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 24 14:21:46 newest-cni-948249 crio[615]: time="2025-11-24T14:21:46.142932317Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=4870c43a-6dbd-46a1-bd78-fecf2b08f1bb name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 24 14:21:46 newest-cni-948249 crio[615]: time="2025-11-24T14:21:46.157148234Z" level=info msg="Ran pod sandbox 6b7797698f1732174d6581e838dd3b6f5f501476a63f68fc5933a4d13952da60 with infra container: kube-system/kindnet-gtj2g/POD" id=4870c43a-6dbd-46a1-bd78-fecf2b08f1bb name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 24 14:21:46 newest-cni-948249 crio[615]: time="2025-11-24T14:21:46.166147244Z" level=info msg="Ran pod sandbox c7b69b9a0af124f85295dc30fb04836531e971ac59fd952260995c9f69ab28f8 with infra container: kube-system/kube-proxy-tsnk9/POD" id=cefa8f05-f10d-4805-8982-15faf44363ae name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 24 14:21:46 newest-cni-948249 crio[615]: time="2025-11-24T14:21:46.169530221Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=590e7fa5-a6b1-4db8-8a00-ba2665c3257f name=/runtime.v1.ImageService/ImageStatus
	Nov 24 14:21:46 newest-cni-948249 crio[615]: time="2025-11-24T14:21:46.186302037Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=2a161b99-8e9f-4af4-a6f2-0571f85e2eb1 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 14:21:46 newest-cni-948249 crio[615]: time="2025-11-24T14:21:46.186898934Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=ebe60da9-b746-4c5f-8e92-8f61cae78498 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 14:21:46 newest-cni-948249 crio[615]: time="2025-11-24T14:21:46.188355975Z" level=info msg="Creating container: kube-system/kindnet-gtj2g/kindnet-cni" id=fa7001ce-3f4d-4fe8-9cc5-b23c5b8fa779 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 14:21:46 newest-cni-948249 crio[615]: time="2025-11-24T14:21:46.18858577Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 14:21:46 newest-cni-948249 crio[615]: time="2025-11-24T14:21:46.203914798Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 14:21:46 newest-cni-948249 crio[615]: time="2025-11-24T14:21:46.205098876Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 14:21:46 newest-cni-948249 crio[615]: time="2025-11-24T14:21:46.208586321Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=f5c53638-bb06-4cfb-92ab-5428a524d27b name=/runtime.v1.ImageService/ImageStatus
	Nov 24 14:21:46 newest-cni-948249 crio[615]: time="2025-11-24T14:21:46.235550998Z" level=info msg="Created container da5f54f6db301a368f859207f45ea734aa7736c9ded63a62f5af397e2d1affc4: kube-system/kindnet-gtj2g/kindnet-cni" id=fa7001ce-3f4d-4fe8-9cc5-b23c5b8fa779 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 14:21:46 newest-cni-948249 crio[615]: time="2025-11-24T14:21:46.236512829Z" level=info msg="Creating container: kube-system/kube-proxy-tsnk9/kube-proxy" id=f311ea7e-0448-4063-8952-1f84add9a0c3 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 14:21:46 newest-cni-948249 crio[615]: time="2025-11-24T14:21:46.236639641Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 14:21:46 newest-cni-948249 crio[615]: time="2025-11-24T14:21:46.245123593Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 14:21:46 newest-cni-948249 crio[615]: time="2025-11-24T14:21:46.245673515Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 14:21:46 newest-cni-948249 crio[615]: time="2025-11-24T14:21:46.249838013Z" level=info msg="Starting container: da5f54f6db301a368f859207f45ea734aa7736c9ded63a62f5af397e2d1affc4" id=602504bc-6911-416f-9dbd-70f4008b7076 name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 14:21:46 newest-cni-948249 crio[615]: time="2025-11-24T14:21:46.259728044Z" level=info msg="Started container" PID=1064 containerID=da5f54f6db301a368f859207f45ea734aa7736c9ded63a62f5af397e2d1affc4 description=kube-system/kindnet-gtj2g/kindnet-cni id=602504bc-6911-416f-9dbd-70f4008b7076 name=/runtime.v1.RuntimeService/StartContainer sandboxID=6b7797698f1732174d6581e838dd3b6f5f501476a63f68fc5933a4d13952da60
	Nov 24 14:21:46 newest-cni-948249 crio[615]: time="2025-11-24T14:21:46.524117395Z" level=info msg="Created container cb6d64e8ea20c49cf95a80ff6125d64c28760dad4ffb41835bacb5da24d441ad: kube-system/kube-proxy-tsnk9/kube-proxy" id=f311ea7e-0448-4063-8952-1f84add9a0c3 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 14:21:46 newest-cni-948249 crio[615]: time="2025-11-24T14:21:46.527978087Z" level=info msg="Starting container: cb6d64e8ea20c49cf95a80ff6125d64c28760dad4ffb41835bacb5da24d441ad" id=582ac2f2-e652-49ae-978a-5b10c1ac00b0 name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 14:21:46 newest-cni-948249 crio[615]: time="2025-11-24T14:21:46.531750023Z" level=info msg="Started container" PID=1075 containerID=cb6d64e8ea20c49cf95a80ff6125d64c28760dad4ffb41835bacb5da24d441ad description=kube-system/kube-proxy-tsnk9/kube-proxy id=582ac2f2-e652-49ae-978a-5b10c1ac00b0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c7b69b9a0af124f85295dc30fb04836531e971ac59fd952260995c9f69ab28f8
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	cb6d64e8ea20c       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   5 seconds ago       Running             kube-proxy                1                   c7b69b9a0af12       kube-proxy-tsnk9                            kube-system
	da5f54f6db301       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   5 seconds ago       Running             kindnet-cni               1                   6b7797698f173       kindnet-gtj2g                               kube-system
	7c2103ac3f27c       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   10 seconds ago      Running             kube-scheduler            1                   885dd9202e22b       kube-scheduler-newest-cni-948249            kube-system
	a4c6511774af6       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   10 seconds ago      Running             kube-apiserver            1                   ab87cb8e35a51       kube-apiserver-newest-cni-948249            kube-system
	74e945fd0551c       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   10 seconds ago      Running             etcd                      1                   51a5b7763663d       etcd-newest-cni-948249                      kube-system
	ad4b785b35c86       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   10 seconds ago      Running             kube-controller-manager   1                   9cffbe011ae7a       kube-controller-manager-newest-cni-948249   kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-948249
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-948249
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b5d1c9f4e75f4e638a533695fd62619949cefcab
	                    minikube.k8s.io/name=newest-cni-948249
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T14_21_22_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 14:21:18 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-948249
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 14:21:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 14:21:46 +0000   Mon, 24 Nov 2025 14:21:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 14:21:46 +0000   Mon, 24 Nov 2025 14:21:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 14:21:46 +0000   Mon, 24 Nov 2025 14:21:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Mon, 24 Nov 2025 14:21:46 +0000   Mon, 24 Nov 2025 14:21:13 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    newest-cni-948249
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 7283ea1857f18f20a875c29069214c9d
	  System UUID:                20c80147-87d0-4669-a827-37cbb2c6caf8
	  Boot ID:                    1b5f797b-5607-4a65-8de2-379783b7e272
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-948249                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         30s
	  kube-system                 kindnet-gtj2g                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      25s
	  kube-system                 kube-apiserver-newest-cni-948249             250m (12%)    0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-controller-manager-newest-cni-948249    200m (10%)    0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-proxy-tsnk9                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  kube-system                 kube-scheduler-newest-cni-948249             100m (5%)     0 (0%)      0 (0%)           0 (0%)         30s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 23s                kube-proxy       
	  Normal   Starting                 3s                 kube-proxy       
	  Warning  CgroupV1                 38s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  38s (x8 over 38s)  kubelet          Node newest-cni-948249 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    38s (x8 over 38s)  kubelet          Node newest-cni-948249 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     38s (x8 over 38s)  kubelet          Node newest-cni-948249 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  30s                kubelet          Node newest-cni-948249 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 30s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    30s                kubelet          Node newest-cni-948249 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     30s                kubelet          Node newest-cni-948249 status is now: NodeHasSufficientPID
	  Normal   Starting                 30s                kubelet          Starting kubelet.
	  Normal   RegisteredNode           26s                node-controller  Node newest-cni-948249 event: Registered Node newest-cni-948249 in Controller
	  Normal   Starting                 12s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 12s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  11s (x8 over 12s)  kubelet          Node newest-cni-948249 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11s (x8 over 12s)  kubelet          Node newest-cni-948249 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11s (x8 over 12s)  kubelet          Node newest-cni-948249 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           2s                 node-controller  Node newest-cni-948249 event: Registered Node newest-cni-948249 in Controller
	
	
	==> dmesg <==
	[Nov24 13:58] overlayfs: idmapped layers are currently not supported
	[  +2.963383] overlayfs: idmapped layers are currently not supported
	[ +47.364934] overlayfs: idmapped layers are currently not supported
	[Nov24 13:59] overlayfs: idmapped layers are currently not supported
	[Nov24 14:00] overlayfs: idmapped layers are currently not supported
	[ +26.972375] overlayfs: idmapped layers are currently not supported
	[Nov24 14:02] overlayfs: idmapped layers are currently not supported
	[Nov24 14:03] overlayfs: idmapped layers are currently not supported
	[Nov24 14:05] overlayfs: idmapped layers are currently not supported
	[Nov24 14:07] overlayfs: idmapped layers are currently not supported
	[ +22.741489] overlayfs: idmapped layers are currently not supported
	[Nov24 14:11] overlayfs: idmapped layers are currently not supported
	[Nov24 14:13] overlayfs: idmapped layers are currently not supported
	[ +29.661409] overlayfs: idmapped layers are currently not supported
	[ +14.398898] overlayfs: idmapped layers are currently not supported
	[Nov24 14:14] overlayfs: idmapped layers are currently not supported
	[ +36.148198] overlayfs: idmapped layers are currently not supported
	[Nov24 14:16] overlayfs: idmapped layers are currently not supported
	[Nov24 14:17] overlayfs: idmapped layers are currently not supported
	[Nov24 14:18] overlayfs: idmapped layers are currently not supported
	[ +49.916713] overlayfs: idmapped layers are currently not supported
	[Nov24 14:19] overlayfs: idmapped layers are currently not supported
	[Nov24 14:20] overlayfs: idmapped layers are currently not supported
	[Nov24 14:21] overlayfs: idmapped layers are currently not supported
	[ +26.692408] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [74e945fd0551cbdb4d26a6a893c168c98947e7a7d5498e5bd8a088068bacefc7] <==
	{"level":"warn","ts":"2025-11-24T14:21:44.177429Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57712","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:21:44.200314Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57730","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:21:44.218035Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57754","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:21:44.242379Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57770","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:21:44.257127Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57800","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:21:44.271767Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57812","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:21:44.289180Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57820","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:21:44.307741Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57830","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:21:44.322914Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57872","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:21:44.339420Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57874","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:21:44.363882Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57900","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:21:44.380758Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57918","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:21:44.400993Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57940","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:21:44.419178Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57954","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:21:44.437380Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57974","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:21:44.454925Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57986","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:21:44.476782Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58008","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:21:44.494637Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58024","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:21:44.547596Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58032","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:21:44.562768Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58048","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:21:44.575643Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58054","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:21:44.609111Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58068","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:21:44.620496Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58082","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:21:44.640460Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58100","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:21:44.696213Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58132","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 14:21:51 up  2:04,  0 user,  load average: 2.83, 2.86, 2.53
	Linux newest-cni-948249 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [da5f54f6db301a368f859207f45ea734aa7736c9ded63a62f5af397e2d1affc4] <==
	I1124 14:21:46.342792       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1124 14:21:46.343075       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1124 14:21:46.343275       1 main.go:148] setting mtu 1500 for CNI 
	I1124 14:21:46.343292       1 main.go:178] kindnetd IP family: "ipv4"
	I1124 14:21:46.343303       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-24T14:21:46Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1124 14:21:46.561219       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1124 14:21:46.581591       1 controller.go:381] "Waiting for informer caches to sync"
	I1124 14:21:46.581698       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1124 14:21:46.581845       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [a4c6511774af615aa1669300a2c6d04bd89dfe944ed96314161c9a512760f916] <==
	I1124 14:21:45.853502       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1124 14:21:45.868186       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1124 14:21:45.868245       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1124 14:21:45.894043       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1124 14:21:45.894165       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1124 14:21:45.906311       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1124 14:21:45.932206       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1124 14:21:45.932307       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1124 14:21:45.932323       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1124 14:21:45.932583       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1124 14:21:45.932636       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1124 14:21:45.933330       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1124 14:21:45.959101       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	E1124 14:21:46.012165       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1124 14:21:46.437170       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1124 14:21:46.787949       1 controller.go:667] quota admission added evaluator for: namespaces
	I1124 14:21:46.969022       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1124 14:21:47.111139       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1124 14:21:47.165167       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1124 14:21:47.330367       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.110.146.244"}
	I1124 14:21:47.358916       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.96.184.44"}
	I1124 14:21:49.528020       1 controller.go:667] quota admission added evaluator for: endpoints
	I1124 14:21:49.578058       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1124 14:21:49.633136       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1124 14:21:49.701326       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	
	
	==> kube-controller-manager [ad4b785b35c8646bcdb429734f18a147ca4264fd37bc967141b4cdc9b42a59a0] <==
	I1124 14:21:49.022510       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1124 14:21:49.022621       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-948249"
	I1124 14:21:49.022700       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1124 14:21:49.023977       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1124 14:21:49.022282       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1124 14:21:49.025849       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1124 14:21:49.046727       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1124 14:21:49.055783       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1124 14:21:49.055864       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1124 14:21:49.059648       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1124 14:21:49.067339       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1124 14:21:49.070013       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1124 14:21:49.071606       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1124 14:21:49.071690       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1124 14:21:49.071777       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1124 14:21:49.071815       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1124 14:21:49.071940       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1124 14:21:49.071981       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1124 14:21:49.075263       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1124 14:21:49.091498       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 14:21:49.091593       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1124 14:21:49.123915       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 14:21:49.123946       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1124 14:21:49.123954       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1124 14:21:49.127790       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [cb6d64e8ea20c49cf95a80ff6125d64c28760dad4ffb41835bacb5da24d441ad] <==
	I1124 14:21:47.349287       1 server_linux.go:53] "Using iptables proxy"
	I1124 14:21:47.497378       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1124 14:21:47.602583       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1124 14:21:47.602630       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1124 14:21:47.602708       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 14:21:47.619978       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 14:21:47.620031       1 server_linux.go:132] "Using iptables Proxier"
	I1124 14:21:47.624280       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 14:21:47.624575       1 server.go:527] "Version info" version="v1.34.1"
	I1124 14:21:47.624593       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 14:21:47.626092       1 config.go:200] "Starting service config controller"
	I1124 14:21:47.626158       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 14:21:47.626283       1 config.go:106] "Starting endpoint slice config controller"
	I1124 14:21:47.627175       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 14:21:47.626302       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 14:21:47.627188       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1124 14:21:47.626949       1 config.go:309] "Starting node config controller"
	I1124 14:21:47.627198       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 14:21:47.627202       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1124 14:21:47.728145       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1124 14:21:47.728248       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1124 14:21:47.728290       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [7c2103ac3f27c474a4868f8f4a6aad887ec147e5c1701cdeb90f84a5a85c8b8c] <==
	I1124 14:21:44.778451       1 serving.go:386] Generated self-signed cert in-memory
	I1124 14:21:47.547198       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1124 14:21:47.547231       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 14:21:47.552508       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1124 14:21:47.552543       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1124 14:21:47.552622       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1124 14:21:47.552734       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1124 14:21:47.552968       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 14:21:47.552990       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 14:21:47.553007       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1124 14:21:47.553014       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1124 14:21:47.653709       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1124 14:21:47.653846       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 14:21:47.654539       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	
	
	==> kubelet <==
	Nov 24 14:21:42 newest-cni-948249 kubelet[735]: E1124 14:21:42.044023     735 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-948249\" not found" node="newest-cni-948249"
	Nov 24 14:21:44 newest-cni-948249 kubelet[735]: E1124 14:21:44.039602     735 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-948249\" not found" node="newest-cni-948249"
	Nov 24 14:21:45 newest-cni-948249 kubelet[735]: I1124 14:21:45.693987     735 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-948249"
	Nov 24 14:21:45 newest-cni-948249 kubelet[735]: I1124 14:21:45.778278     735 apiserver.go:52] "Watching apiserver"
	Nov 24 14:21:45 newest-cni-948249 kubelet[735]: I1124 14:21:45.898158     735 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 24 14:21:45 newest-cni-948249 kubelet[735]: I1124 14:21:45.938628     735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2cd4d95f-1e99-425c-948b-1ee004fea3ac-xtables-lock\") pod \"kube-proxy-tsnk9\" (UID: \"2cd4d95f-1e99-425c-948b-1ee004fea3ac\") " pod="kube-system/kube-proxy-tsnk9"
	Nov 24 14:21:45 newest-cni-948249 kubelet[735]: I1124 14:21:45.938686     735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e153411a-2f4b-4151-b83b-19611f170cfb-lib-modules\") pod \"kindnet-gtj2g\" (UID: \"e153411a-2f4b-4151-b83b-19611f170cfb\") " pod="kube-system/kindnet-gtj2g"
	Nov 24 14:21:45 newest-cni-948249 kubelet[735]: I1124 14:21:45.938729     735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e153411a-2f4b-4151-b83b-19611f170cfb-xtables-lock\") pod \"kindnet-gtj2g\" (UID: \"e153411a-2f4b-4151-b83b-19611f170cfb\") " pod="kube-system/kindnet-gtj2g"
	Nov 24 14:21:45 newest-cni-948249 kubelet[735]: I1124 14:21:45.938759     735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2cd4d95f-1e99-425c-948b-1ee004fea3ac-lib-modules\") pod \"kube-proxy-tsnk9\" (UID: \"2cd4d95f-1e99-425c-948b-1ee004fea3ac\") " pod="kube-system/kube-proxy-tsnk9"
	Nov 24 14:21:45 newest-cni-948249 kubelet[735]: I1124 14:21:45.938780     735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/e153411a-2f4b-4151-b83b-19611f170cfb-cni-cfg\") pod \"kindnet-gtj2g\" (UID: \"e153411a-2f4b-4151-b83b-19611f170cfb\") " pod="kube-system/kindnet-gtj2g"
	Nov 24 14:21:45 newest-cni-948249 kubelet[735]: E1124 14:21:45.987291     735 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-948249\" already exists" pod="kube-system/etcd-newest-cni-948249"
	Nov 24 14:21:45 newest-cni-948249 kubelet[735]: I1124 14:21:45.987745     735 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-948249"
	Nov 24 14:21:45 newest-cni-948249 kubelet[735]: I1124 14:21:45.987706     735 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 24 14:21:46 newest-cni-948249 kubelet[735]: I1124 14:21:46.013410     735 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-948249"
	Nov 24 14:21:46 newest-cni-948249 kubelet[735]: I1124 14:21:46.013525     735 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-948249"
	Nov 24 14:21:46 newest-cni-948249 kubelet[735]: I1124 14:21:46.013559     735 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Nov 24 14:21:46 newest-cni-948249 kubelet[735]: I1124 14:21:46.019327     735 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Nov 24 14:21:46 newest-cni-948249 kubelet[735]: E1124 14:21:46.047974     735 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-948249\" already exists" pod="kube-system/kube-apiserver-newest-cni-948249"
	Nov 24 14:21:46 newest-cni-948249 kubelet[735]: I1124 14:21:46.048018     735 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-948249"
	Nov 24 14:21:46 newest-cni-948249 kubelet[735]: E1124 14:21:46.101706     735 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-948249\" already exists" pod="kube-system/kube-controller-manager-newest-cni-948249"
	Nov 24 14:21:46 newest-cni-948249 kubelet[735]: I1124 14:21:46.101748     735 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-948249"
	Nov 24 14:21:46 newest-cni-948249 kubelet[735]: E1124 14:21:46.128206     735 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-948249\" already exists" pod="kube-system/kube-scheduler-newest-cni-948249"
	Nov 24 14:21:49 newest-cni-948249 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 24 14:21:49 newest-cni-948249 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 24 14:21:49 newest-cni-948249 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
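The kube-proxy section in the dump above warns that nodePortAddresses is unset and suggests `--nodeport-addresses primary`. As a reference point, a minimal sketch of the equivalent KubeProxyConfiguration fragment; the field and the special value "primary" come from the kubeproxy.config.k8s.io/v1alpha1 API, and the file name is hypothetical (this run does not set it, so the warning is expected):

	# hypothetical config file; "primary" limits NodePort listeners to the
	# node's primary-IP-family addresses instead of every local IP
	cat <<'EOF' > kube-proxy-config.yaml
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	nodePortAddresses: ["primary"]
	EOF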
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-948249 -n newest-cni-948249
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-948249 -n newest-cni-948249: exit status 2 (377.958316ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
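Exit status 2 here is informational rather than fatal: minikube status encodes component state in the exit code, so a paused cluster returns nonzero even while the printed fields read "Running". A sketch of the same probe run by hand, using the profile name from this run:

	# read a single status field via a Go template; the trailing echo keeps
	# the nonzero exit code visible without aborting a `set -e` shell
	out/minikube-linux-arm64 status --format='{{.APIServer}}' -p newest-cni-948249 \
	  || echo "status exited $? (may be ok)"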
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-948249 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-6rv2z storage-provisioner dashboard-metrics-scraper-6ffb444bf9-5txnj kubernetes-dashboard-855c9754f9-qd7pt
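Note that status.phase!=Running also matches Pending, Succeeded, Failed, and Unknown, which is why the just-created dashboard pods appear here while still Pending. The same selection, widened for a manual look (standard kubectl flags; context name from this run):

	# list every non-Running pod across all namespaces with node/IP columns
	kubectl --context newest-cni-948249 get po -A \
	  --field-selector=status.phase!=Running -o wide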
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-948249 describe pod coredns-66bc5c9577-6rv2z storage-provisioner dashboard-metrics-scraper-6ffb444bf9-5txnj kubernetes-dashboard-855c9754f9-qd7pt
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-948249 describe pod coredns-66bc5c9577-6rv2z storage-provisioner dashboard-metrics-scraper-6ffb444bf9-5txnj kubernetes-dashboard-855c9754f9-qd7pt: exit status 1 (88.020134ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-6rv2z" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-5txnj" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-qd7pt" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-948249 describe pod coredns-66bc5c9577-6rv2z storage-provisioner dashboard-metrics-scraper-6ffb444bf9-5txnj kubernetes-dashboard-855c9754f9-qd7pt: exit status 1
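The NotFound errors are a namespace-defaulting artifact: kubectl describe pod without -n searches only the default namespace, while these pods live in kube-system and kubernetes-dashboard. Named explicitly they resolve; a sketch using the pod names from the listing above:

	# describe the same pods in the namespaces they actually belong to
	kubectl --context newest-cni-948249 -n kube-system describe pod storage-provisioner
	kubectl --context newest-cni-948249 -n kubernetes-dashboard \
	  describe pod kubernetes-dashboard-855c9754f9-qd7pt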
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-948249
helpers_test.go:243: (dbg) docker inspect newest-cni-948249:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "772438acfd0507cc7ff013b62dafaae325e30233c90e406c54940e5df5577713",
	        "Created": "2025-11-24T14:20:58.322672048Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 209631,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T14:21:32.559039857Z",
	            "FinishedAt": "2025-11-24T14:21:31.501883891Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/772438acfd0507cc7ff013b62dafaae325e30233c90e406c54940e5df5577713/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/772438acfd0507cc7ff013b62dafaae325e30233c90e406c54940e5df5577713/hostname",
	        "HostsPath": "/var/lib/docker/containers/772438acfd0507cc7ff013b62dafaae325e30233c90e406c54940e5df5577713/hosts",
	        "LogPath": "/var/lib/docker/containers/772438acfd0507cc7ff013b62dafaae325e30233c90e406c54940e5df5577713/772438acfd0507cc7ff013b62dafaae325e30233c90e406c54940e5df5577713-json.log",
	        "Name": "/newest-cni-948249",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "newest-cni-948249:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-948249",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "772438acfd0507cc7ff013b62dafaae325e30233c90e406c54940e5df5577713",
	                "LowerDir": "/var/lib/docker/overlay2/9d14b5b721eebc7a335511b1a48a7c6f6dd15a362ce50ae547a78ac50c54fc9f-init/diff:/var/lib/docker/overlay2/13a44a1c9c7389f495d930a01834ff28273a0e5eb2fe3411fc4db3ff0709690d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9d14b5b721eebc7a335511b1a48a7c6f6dd15a362ce50ae547a78ac50c54fc9f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9d14b5b721eebc7a335511b1a48a7c6f6dd15a362ce50ae547a78ac50c54fc9f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9d14b5b721eebc7a335511b1a48a7c6f6dd15a362ce50ae547a78ac50c54fc9f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-948249",
	                "Source": "/var/lib/docker/volumes/newest-cni-948249/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-948249",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-948249",
	                "name.minikube.sigs.k8s.io": "newest-cni-948249",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "130ad8f2339db04c2f90785de43dba4041406fcdc11060484f4678d339eca2ea",
	            "SandboxKey": "/var/run/docker/netns/130ad8f2339d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33088"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33089"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33092"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33090"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33091"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-948249": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "5a:c9:c2:e2:4f:a6",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c3da6258d7ca1e0640d947578734878b1bdb58036b53baabf5783f672d1a649d",
	                    "EndpointID": "0d5a89f2eb646e571cdab4ea300ec0e1b8f7f7069cd9f0bf99c19a650e27ce39",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-948249",
	                        "772438acfd05"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
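The inspect payload shows every guest port published on a loopback-only ephemeral host port; 8443/tcp, the API server, lands on 127.0.0.1:33091 here. A sketch of extracting that mapping with a Go template rather than scanning the JSON by eye:

	# print the host port that fronts the API server (expected here: 33091)
	docker inspect -f \
	  '{{ (index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort }}' \
	  newest-cni-948249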
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-948249 -n newest-cni-948249
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-948249 -n newest-cni-948249: exit status 2 (329.894764ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-948249 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-948249 logs -n 25: (1.146251158s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable dashboard -p no-preload-444317 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-444317            │ jenkins │ v1.37.0 │ 24 Nov 25 14:18 UTC │ 24 Nov 25 14:18 UTC │
	│ start   │ -p no-preload-444317 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-444317            │ jenkins │ v1.37.0 │ 24 Nov 25 14:18 UTC │ 24 Nov 25 14:19 UTC │
	│ addons  │ enable metrics-server -p embed-certs-720293 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-720293           │ jenkins │ v1.37.0 │ 24 Nov 25 14:19 UTC │                     │
	│ stop    │ -p embed-certs-720293 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-720293           │ jenkins │ v1.37.0 │ 24 Nov 25 14:19 UTC │ 24 Nov 25 14:19 UTC │
	│ addons  │ enable dashboard -p embed-certs-720293 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-720293           │ jenkins │ v1.37.0 │ 24 Nov 25 14:19 UTC │ 24 Nov 25 14:19 UTC │
	│ start   │ -p embed-certs-720293 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-720293           │ jenkins │ v1.37.0 │ 24 Nov 25 14:19 UTC │ 24 Nov 25 14:20 UTC │
	│ image   │ no-preload-444317 image list --format=json                                                                                                                                                                                                    │ no-preload-444317            │ jenkins │ v1.37.0 │ 24 Nov 25 14:19 UTC │ 24 Nov 25 14:19 UTC │
	│ pause   │ -p no-preload-444317 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-444317            │ jenkins │ v1.37.0 │ 24 Nov 25 14:19 UTC │                     │
	│ delete  │ -p no-preload-444317                                                                                                                                                                                                                          │ no-preload-444317            │ jenkins │ v1.37.0 │ 24 Nov 25 14:20 UTC │ 24 Nov 25 14:20 UTC │
	│ delete  │ -p no-preload-444317                                                                                                                                                                                                                          │ no-preload-444317            │ jenkins │ v1.37.0 │ 24 Nov 25 14:20 UTC │ 24 Nov 25 14:20 UTC │
	│ delete  │ -p disable-driver-mounts-799392                                                                                                                                                                                                               │ disable-driver-mounts-799392 │ jenkins │ v1.37.0 │ 24 Nov 25 14:20 UTC │ 24 Nov 25 14:20 UTC │
	│ start   │ -p default-k8s-diff-port-152851 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-152851 │ jenkins │ v1.37.0 │ 24 Nov 25 14:20 UTC │ 24 Nov 25 14:21 UTC │
	│ image   │ embed-certs-720293 image list --format=json                                                                                                                                                                                                   │ embed-certs-720293           │ jenkins │ v1.37.0 │ 24 Nov 25 14:20 UTC │ 24 Nov 25 14:20 UTC │
	│ pause   │ -p embed-certs-720293 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-720293           │ jenkins │ v1.37.0 │ 24 Nov 25 14:20 UTC │                     │
	│ delete  │ -p embed-certs-720293                                                                                                                                                                                                                         │ embed-certs-720293           │ jenkins │ v1.37.0 │ 24 Nov 25 14:20 UTC │ 24 Nov 25 14:20 UTC │
	│ delete  │ -p embed-certs-720293                                                                                                                                                                                                                         │ embed-certs-720293           │ jenkins │ v1.37.0 │ 24 Nov 25 14:20 UTC │ 24 Nov 25 14:20 UTC │
	│ start   │ -p newest-cni-948249 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-948249            │ jenkins │ v1.37.0 │ 24 Nov 25 14:20 UTC │ 24 Nov 25 14:21 UTC │
	│ addons  │ enable metrics-server -p newest-cni-948249 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-948249            │ jenkins │ v1.37.0 │ 24 Nov 25 14:21 UTC │                     │
	│ stop    │ -p newest-cni-948249 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-948249            │ jenkins │ v1.37.0 │ 24 Nov 25 14:21 UTC │ 24 Nov 25 14:21 UTC │
	│ addons  │ enable dashboard -p newest-cni-948249 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-948249            │ jenkins │ v1.37.0 │ 24 Nov 25 14:21 UTC │ 24 Nov 25 14:21 UTC │
	│ start   │ -p newest-cni-948249 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-948249            │ jenkins │ v1.37.0 │ 24 Nov 25 14:21 UTC │ 24 Nov 25 14:21 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-152851 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-152851 │ jenkins │ v1.37.0 │ 24 Nov 25 14:21 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-152851 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-152851 │ jenkins │ v1.37.0 │ 24 Nov 25 14:21 UTC │                     │
	│ image   │ newest-cni-948249 image list --format=json                                                                                                                                                                                                    │ newest-cni-948249            │ jenkins │ v1.37.0 │ 24 Nov 25 14:21 UTC │ 24 Nov 25 14:21 UTC │
	│ pause   │ -p newest-cni-948249 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-948249            │ jenkins │ v1.37.0 │ 24 Nov 25 14:21 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 14:21:32
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 14:21:32.241151  209505 out.go:360] Setting OutFile to fd 1 ...
	I1124 14:21:32.241282  209505 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 14:21:32.241297  209505 out.go:374] Setting ErrFile to fd 2...
	I1124 14:21:32.241302  209505 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 14:21:32.241541  209505 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-2805/.minikube/bin
	I1124 14:21:32.241904  209505 out.go:368] Setting JSON to false
	I1124 14:21:32.242762  209505 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":7444,"bootTime":1763986649,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1124 14:21:32.242830  209505 start.go:143] virtualization:  
	I1124 14:21:32.247665  209505 out.go:179] * [newest-cni-948249] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1124 14:21:32.250555  209505 out.go:179]   - MINIKUBE_LOCATION=21932
	I1124 14:21:32.250705  209505 notify.go:221] Checking for updates...
	I1124 14:21:32.256421  209505 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 14:21:32.259281  209505 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21932-2805/kubeconfig
	I1124 14:21:32.262191  209505 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21932-2805/.minikube
	I1124 14:21:32.265008  209505 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1124 14:21:32.267799  209505 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 14:21:32.271191  209505 config.go:182] Loaded profile config "newest-cni-948249": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 14:21:32.271931  209505 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 14:21:32.306113  209505 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1124 14:21:32.306216  209505 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 14:21:32.374178  209505 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-24 14:21:32.363824533 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 14:21:32.374286  209505 docker.go:319] overlay module found
	I1124 14:21:32.379470  209505 out.go:179] * Using the docker driver based on existing profile
	I1124 14:21:32.382410  209505 start.go:309] selected driver: docker
	I1124 14:21:32.382432  209505 start.go:927] validating driver "docker" against &{Name:newest-cni-948249 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-948249 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 14:21:32.382557  209505 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 14:21:32.383311  209505 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 14:21:32.457859  209505 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-24 14:21:32.447896223 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 14:21:32.458203  209505 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1124 14:21:32.458238  209505 cni.go:84] Creating CNI manager for ""
	I1124 14:21:32.458301  209505 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 14:21:32.458360  209505 start.go:353] cluster config:
	{Name:newest-cni-948249 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-948249 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 14:21:32.463441  209505 out.go:179] * Starting "newest-cni-948249" primary control-plane node in "newest-cni-948249" cluster
	I1124 14:21:32.466278  209505 cache.go:134] Beginning downloading kic base image for docker with crio
	I1124 14:21:32.469284  209505 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1124 14:21:32.472125  209505 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 14:21:32.472178  209505 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21932-2805/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1124 14:21:32.472193  209505 cache.go:65] Caching tarball of preloaded images
	I1124 14:21:32.472195  209505 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1124 14:21:32.472274  209505 preload.go:238] Found /home/jenkins/minikube-integration/21932-2805/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1124 14:21:32.472284  209505 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1124 14:21:32.472402  209505 profile.go:143] Saving config to /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/newest-cni-948249/config.json ...
	I1124 14:21:32.492284  209505 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1124 14:21:32.492308  209505 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1124 14:21:32.492329  209505 cache.go:240] Successfully downloaded all kic artifacts
	I1124 14:21:32.492361  209505 start.go:360] acquireMachinesLock for newest-cni-948249: {Name:mk494569275f434d30089868c4fe183eb1572641 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 14:21:32.492422  209505 start.go:364] duration metric: took 38.105µs to acquireMachinesLock for "newest-cni-948249"
	I1124 14:21:32.492446  209505 start.go:96] Skipping create...Using existing machine configuration
	I1124 14:21:32.492454  209505 fix.go:54] fixHost starting: 
	I1124 14:21:32.492727  209505 cli_runner.go:164] Run: docker container inspect newest-cni-948249 --format={{.State.Status}}
	I1124 14:21:32.516608  209505 fix.go:112] recreateIfNeeded on newest-cni-948249: state=Stopped err=<nil>
	W1124 14:21:32.516639  209505 fix.go:138] unexpected machine state, will restart: <nil>
	I1124 14:21:32.519714  209505 out.go:252] * Restarting existing docker container for "newest-cni-948249" ...
	I1124 14:21:32.519807  209505 cli_runner.go:164] Run: docker start newest-cni-948249
	I1124 14:21:32.778701  209505 cli_runner.go:164] Run: docker container inspect newest-cni-948249 --format={{.State.Status}}
	I1124 14:21:32.799126  209505 kic.go:430] container "newest-cni-948249" state is running.
	I1124 14:21:32.799541  209505 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-948249
	I1124 14:21:32.825330  209505 profile.go:143] Saving config to /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/newest-cni-948249/config.json ...
	I1124 14:21:32.825574  209505 machine.go:94] provisionDockerMachine start ...
	I1124 14:21:32.825637  209505 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-948249
	I1124 14:21:32.846306  209505 main.go:143] libmachine: Using SSH client type: native
	I1124 14:21:32.846635  209505 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1124 14:21:32.846644  209505 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 14:21:32.847340  209505 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1124 14:21:36.007021  209505 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-948249
	
	I1124 14:21:36.007050  209505 ubuntu.go:182] provisioning hostname "newest-cni-948249"
	I1124 14:21:36.007135  209505 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-948249
	I1124 14:21:36.027540  209505 main.go:143] libmachine: Using SSH client type: native
	I1124 14:21:36.027874  209505 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1124 14:21:36.027886  209505 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-948249 && echo "newest-cni-948249" | sudo tee /etc/hostname
	I1124 14:21:36.188922  209505 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-948249
	
	I1124 14:21:36.189003  209505 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-948249
	I1124 14:21:36.206809  209505 main.go:143] libmachine: Using SSH client type: native
	I1124 14:21:36.207126  209505 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1124 14:21:36.207146  209505 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-948249' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-948249/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-948249' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 14:21:36.364255  209505 main.go:143] libmachine: SSH cmd err, output: <nil>: 
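The SSH script logged just above is an idempotent hostname pin: if /etc/hosts already carries a 127.0.1.1 entry it is rewritten in place, otherwise a fresh entry is appended. A minimal replay of the same logic against a scratch file (the file name and seed contents are invented for illustration):

    # replay of the hosts-pinning logic above on a scratch copy
    HOSTS=./hosts.scratch
    NAME=newest-cni-948249                       # value taken from the log
    printf '127.0.0.1 localhost\n127.0.1.1 old-name\n' > "$HOSTS"
    if ! grep -q "[[:space:]]${NAME}\$" "$HOSTS"; then
      if grep -q '^127\.0\.1\.1[[:space:]]' "$HOSTS"; then
        # an entry exists: rewrite it in place
        sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 ${NAME}/" "$HOSTS"
      else
        # no entry yet: append one
        echo "127.0.1.1 ${NAME}" >> "$HOSTS"
      fi
    fi
    cat "$HOSTS"   # -> 127.0.1.1 newest-cni-948249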
	I1124 14:21:36.364280  209505 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21932-2805/.minikube CaCertPath:/home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21932-2805/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21932-2805/.minikube}
	I1124 14:21:36.364318  209505 ubuntu.go:190] setting up certificates
	I1124 14:21:36.364328  209505 provision.go:84] configureAuth start
	I1124 14:21:36.364397  209505 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-948249
	I1124 14:21:36.382830  209505 provision.go:143] copyHostCerts
	I1124 14:21:36.382908  209505 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-2805/.minikube/ca.pem, removing ...
	I1124 14:21:36.382929  209505 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-2805/.minikube/ca.pem
	I1124 14:21:36.383007  209505 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21932-2805/.minikube/ca.pem (1078 bytes)
	I1124 14:21:36.383117  209505 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-2805/.minikube/cert.pem, removing ...
	I1124 14:21:36.383123  209505 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-2805/.minikube/cert.pem
	I1124 14:21:36.383152  209505 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-2805/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21932-2805/.minikube/cert.pem (1123 bytes)
	I1124 14:21:36.383213  209505 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-2805/.minikube/key.pem, removing ...
	I1124 14:21:36.383218  209505 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-2805/.minikube/key.pem
	I1124 14:21:36.383242  209505 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-2805/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21932-2805/.minikube/key.pem (1675 bytes)
	I1124 14:21:36.383297  209505 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21932-2805/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca-key.pem org=jenkins.newest-cni-948249 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-948249]
	I1124 14:21:36.538131  209505 provision.go:177] copyRemoteCerts
	I1124 14:21:36.538200  209505 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 14:21:36.538245  209505 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-948249
	I1124 14:21:36.559067  209505 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/newest-cni-948249/id_rsa Username:docker}
	I1124 14:21:36.663283  209505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1124 14:21:36.683298  209505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1124 14:21:36.702560  209505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1124 14:21:36.722026  209505 provision.go:87] duration metric: took 357.675696ms to configureAuth
	I1124 14:21:36.722056  209505 ubuntu.go:206] setting minikube options for container-runtime
	I1124 14:21:36.722254  209505 config.go:182] Loaded profile config "newest-cni-948249": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 14:21:36.722357  209505 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-948249
	I1124 14:21:36.739185  209505 main.go:143] libmachine: Using SSH client type: native
	I1124 14:21:36.739558  209505 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1124 14:21:36.739579  209505 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1124 14:21:37.081263  209505 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1124 14:21:37.081290  209505 machine.go:97] duration metric: took 4.255705051s to provisionDockerMachine
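The CRIO_MINIKUBE_OPTIONS write a few lines up drops a one-line sysconfig file that the crio service unit in minikube's base image reads as extra daemon flags, then restarts the service. A quick post-restart check on the node (paths from the log; the systemctl probe itself is illustrative):

    # confirm the drop-in landed and crio came back up with it
    cat /etc/sysconfig/crio.minikube
    systemctl show crio -p ActiveState -p ExecStart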
	I1124 14:21:37.081302  209505 start.go:293] postStartSetup for "newest-cni-948249" (driver="docker")
	I1124 14:21:37.081312  209505 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 14:21:37.081373  209505 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 14:21:37.081419  209505 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-948249
	I1124 14:21:37.099798  209505 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/newest-cni-948249/id_rsa Username:docker}
	I1124 14:21:37.211272  209505 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 14:21:37.215786  209505 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 14:21:37.215827  209505 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 14:21:37.215860  209505 filesync.go:126] Scanning /home/jenkins/minikube-integration/21932-2805/.minikube/addons for local assets ...
	I1124 14:21:37.215959  209505 filesync.go:126] Scanning /home/jenkins/minikube-integration/21932-2805/.minikube/files for local assets ...
	I1124 14:21:37.216111  209505 filesync.go:149] local asset: /home/jenkins/minikube-integration/21932-2805/.minikube/files/etc/ssl/certs/46112.pem -> 46112.pem in /etc/ssl/certs
	I1124 14:21:37.216228  209505 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 14:21:37.225396  209505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/files/etc/ssl/certs/46112.pem --> /etc/ssl/certs/46112.pem (1708 bytes)
	I1124 14:21:37.245136  209505 start.go:296] duration metric: took 163.818459ms for postStartSetup
	I1124 14:21:37.245298  209505 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 14:21:37.245373  209505 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-948249
	I1124 14:21:37.264116  209505 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/newest-cni-948249/id_rsa Username:docker}
	I1124 14:21:37.368517  209505 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 14:21:37.373685  209505 fix.go:56] duration metric: took 4.881213093s for fixHost
	I1124 14:21:37.373711  209505 start.go:83] releasing machines lock for "newest-cni-948249", held for 4.881275937s
	I1124 14:21:37.373788  209505 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-948249
	I1124 14:21:37.390866  209505 ssh_runner.go:195] Run: cat /version.json
	I1124 14:21:37.390922  209505 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-948249
	I1124 14:21:37.391010  209505 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 14:21:37.391073  209505 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-948249
	I1124 14:21:37.412684  209505 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/newest-cni-948249/id_rsa Username:docker}
	I1124 14:21:37.423441  209505 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/newest-cni-948249/id_rsa Username:docker}
	I1124 14:21:37.617432  209505 ssh_runner.go:195] Run: systemctl --version
	I1124 14:21:37.625309  209505 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1124 14:21:37.664634  209505 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 14:21:37.669012  209505 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 14:21:37.669111  209505 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 14:21:37.678625  209505 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1124 14:21:37.678647  209505 start.go:496] detecting cgroup driver to use...
	I1124 14:21:37.678678  209505 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1124 14:21:37.678725  209505 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1124 14:21:37.696878  209505 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1124 14:21:37.713347  209505 docker.go:218] disabling cri-docker service (if available) ...
	I1124 14:21:37.713409  209505 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 14:21:37.733057  209505 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 14:21:37.748307  209505 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 14:21:37.926927  209505 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 14:21:38.120375  209505 docker.go:234] disabling docker service ...
	I1124 14:21:38.120457  209505 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 14:21:38.139289  209505 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 14:21:38.158564  209505 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 14:21:38.326353  209505 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 14:21:38.489413  209505 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 14:21:38.515809  209505 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 14:21:38.534483  209505 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1124 14:21:38.534554  209505 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 14:21:38.544781  209505 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1124 14:21:38.544885  209505 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 14:21:38.557097  209505 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 14:21:38.568681  209505 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 14:21:38.578487  209505 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 14:21:38.587788  209505 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 14:21:38.597490  209505 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 14:21:38.608505  209505 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
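Taken together, the sed run above rewrites /etc/crio/crio.conf.d/02-crio.conf: pin the pause image, switch the cgroup manager to cgroupfs, move conmon into the pod cgroup, and open unprivileged ports through default_sysctls. Replaying the same edits on a scratch file shows the end state without touching a live node (the seed contents are invented):

    CONF=./02-crio.conf
    printf '%s\n' '[crio.image]' 'pause_image = "registry.k8s.io/pause:3.9"' \
      '[crio.runtime]' 'cgroup_manager = "systemd"' 'conmon_cgroup = "system.slice"' > "$CONF"
    sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' "$CONF"
    sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
    sed -i '/conmon_cgroup = .*/d' "$CONF"                         # drop the old value...
    sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"  # ...re-add it after cgroup_manager
    grep -q '^ *default_sysctls' "$CONF" || \
      sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' "$CONF"
    sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' "$CONF"
    cat "$CONF"   # inspect the merged drop-in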
	I1124 14:21:38.618618  209505 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 14:21:38.628805  209505 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 14:21:38.637807  209505 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 14:21:38.804972  209505 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1124 14:21:39.001266  209505 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1124 14:21:39.001448  209505 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1124 14:21:39.010908  209505 start.go:564] Will wait 60s for crictl version
	I1124 14:21:39.010984  209505 ssh_runner.go:195] Run: which crictl
	I1124 14:21:39.016839  209505 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 14:21:39.061938  209505 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1124 14:21:39.062030  209505 ssh_runner.go:195] Run: crio --version
	I1124 14:21:39.101645  209505 ssh_runner.go:195] Run: crio --version
	I1124 14:21:39.133905  209505 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1124 14:21:39.136696  209505 cli_runner.go:164] Run: docker network inspect newest-cni-948249 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 14:21:39.159728  209505 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1124 14:21:39.164698  209505 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
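The one-liner above is a replace-or-append for a pinned hosts entry: strip any stale host.minikube.internal line, append the current one, and write the result back. It stages the output in a temp file so the input is not truncated mid-read, and copies (rather than moves) over /etc/hosts, which inside a container is a bind mount that cannot be swapped out. The same trick on a scratch file (seed entries are invented):

    HOSTS=./hosts.scratch
    printf '127.0.0.1 localhost\n192.168.99.1\thost.minikube.internal\n' > "$HOSTS"
    { grep -v $'\thost.minikube.internal$' "$HOSTS"; \
      echo $'192.168.85.1\thost.minikube.internal'; } > /tmp/h.$$
    cp /tmp/h.$$ "$HOSTS"   # cp, not mv: overwrite the existing file in place
    cat "$HOSTS"            # stale 192.168.99.1 entry replaced by 192.168.85.1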
	I1124 14:21:39.177696  209505 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1124 14:21:39.180464  209505 kubeadm.go:884] updating cluster {Name:newest-cni-948249 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-948249 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 14:21:39.180610  209505 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 14:21:39.180677  209505 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 14:21:39.224164  209505 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 14:21:39.224188  209505 crio.go:433] Images already preloaded, skipping extraction
	I1124 14:21:39.224232  209505 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 14:21:39.257798  209505 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 14:21:39.257873  209505 cache_images.go:86] Images are preloaded, skipping loading
	I1124 14:21:39.257904  209505 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1124 14:21:39.258031  209505 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-948249 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-948249 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
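The generated drop-in above relies on standard systemd override semantics: the bare ExecStart= line clears the command inherited from kubelet.service so that the following ExecStart= fully replaces it rather than adding a second command. Two illustrative ways to inspect the merged result on a node:

    systemctl cat kubelet                    # base unit plus the 10-kubeadm.conf drop-in
    systemd-analyze verify kubelet.service   # flags syntax errors in the unit and drop-in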
	I1124 14:21:39.258147  209505 ssh_runner.go:195] Run: crio config
	I1124 14:21:39.369285  209505 cni.go:84] Creating CNI manager for ""
	I1124 14:21:39.369311  209505 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 14:21:39.369342  209505 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1124 14:21:39.369379  209505 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-948249 NodeName:newest-cni-948249 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 14:21:39.369571  209505 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-948249"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1124 14:21:39.369665  209505 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1124 14:21:39.396594  209505 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 14:21:39.396688  209505 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 14:21:39.405490  209505 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1124 14:21:39.429179  209505 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 14:21:39.444978  209505 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
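With the rendered config staged on the node as kubeadm.yaml.new, it can be sanity-checked offline before kubeadm consumes it (a sketch; assumes a kubeadm recent enough to ship the `config validate` subcommand, and uses the binary and path seen in the log):

    /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new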
	I1124 14:21:39.459281  209505 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1124 14:21:39.463953  209505 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 14:21:39.473533  209505 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 14:21:39.631003  209505 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 14:21:39.656011  209505 certs.go:69] Setting up /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/newest-cni-948249 for IP: 192.168.85.2
	I1124 14:21:39.656073  209505 certs.go:195] generating shared ca certs ...
	I1124 14:21:39.656092  209505 certs.go:227] acquiring lock for ca certs: {Name:mk5b88bcf3bee8e73291a2c9c79f99bafa2afa7b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:21:39.656285  209505 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21932-2805/.minikube/ca.key
	I1124 14:21:39.656356  209505 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21932-2805/.minikube/proxy-client-ca.key
	I1124 14:21:39.656370  209505 certs.go:257] generating profile certs ...
	I1124 14:21:39.656473  209505 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/newest-cni-948249/client.key
	I1124 14:21:39.656630  209505 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/newest-cni-948249/apiserver.key.dccfb6e0
	I1124 14:21:39.656705  209505 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/newest-cni-948249/proxy-client.key
	I1124 14:21:39.656854  209505 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2805/.minikube/certs/4611.pem (1338 bytes)
	W1124 14:21:39.656910  209505 certs.go:480] ignoring /home/jenkins/minikube-integration/21932-2805/.minikube/certs/4611_empty.pem, impossibly tiny 0 bytes
	I1124 14:21:39.656928  209505 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca-key.pem (1679 bytes)
	I1124 14:21:39.656971  209505 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca.pem (1078 bytes)
	I1124 14:21:39.657023  209505 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2805/.minikube/certs/cert.pem (1123 bytes)
	I1124 14:21:39.657066  209505 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2805/.minikube/certs/key.pem (1675 bytes)
	I1124 14:21:39.657133  209505 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2805/.minikube/files/etc/ssl/certs/46112.pem (1708 bytes)
	I1124 14:21:39.657778  209505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 14:21:39.704888  209505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1124 14:21:39.756846  209505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 14:21:39.839832  209505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1124 14:21:39.872678  209505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/newest-cni-948249/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1124 14:21:39.920016  209505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/newest-cni-948249/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1124 14:21:39.947119  209505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/newest-cni-948249/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 14:21:39.971193  209505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/newest-cni-948249/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1124 14:21:39.998549  209505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/files/etc/ssl/certs/46112.pem --> /usr/share/ca-certificates/46112.pem (1708 bytes)
	I1124 14:21:40.033092  209505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 14:21:40.058167  209505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/certs/4611.pem --> /usr/share/ca-certificates/4611.pem (1338 bytes)
	I1124 14:21:40.081660  209505 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 14:21:40.101101  209505 ssh_runner.go:195] Run: openssl version
	I1124 14:21:40.110919  209505 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/46112.pem && ln -fs /usr/share/ca-certificates/46112.pem /etc/ssl/certs/46112.pem"
	I1124 14:21:40.122397  209505 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/46112.pem
	I1124 14:21:40.127888  209505 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 13:21 /usr/share/ca-certificates/46112.pem
	I1124 14:21:40.127959  209505 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/46112.pem
	I1124 14:21:40.178378  209505 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/46112.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 14:21:40.188458  209505 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 14:21:40.199252  209505 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 14:21:40.204270  209505 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 13:14 /usr/share/ca-certificates/minikubeCA.pem
	I1124 14:21:40.204349  209505 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 14:21:40.250608  209505 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 14:21:40.259919  209505 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4611.pem && ln -fs /usr/share/ca-certificates/4611.pem /etc/ssl/certs/4611.pem"
	I1124 14:21:40.270264  209505 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4611.pem
	I1124 14:21:40.275606  209505 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 13:21 /usr/share/ca-certificates/4611.pem
	I1124 14:21:40.275685  209505 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4611.pem
	I1124 14:21:40.317953  209505 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4611.pem /etc/ssl/certs/51391683.0"
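The link names above (3ec20f2e.0, b5213941.0, 51391683.0) follow OpenSSL's hashed-directory convention: each certificate is reachable as <subject-hash>.0, so lookups by subject name need no directory scan. Recreating one link by hand (cert path taken from the log):

    CERT=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")    # prints e.g. b5213941
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"
    # `openssl rehash /etc/ssl/certs` (or c_rehash) rebuilds every link at once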
	I1124 14:21:40.337546  209505 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 14:21:40.343196  209505 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1124 14:21:40.483727  209505 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1124 14:21:40.626585  209505 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1124 14:21:40.776251  209505 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1124 14:21:40.910171  209505 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1124 14:21:41.033463  209505 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1124 14:21:41.178601  209505 kubeadm.go:401] StartCluster: {Name:newest-cni-948249 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-948249 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 14:21:41.178697  209505 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 14:21:41.178768  209505 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 14:21:41.277532  209505 cri.go:89] found id: "7c2103ac3f27c474a4868f8f4a6aad887ec147e5c1701cdeb90f84a5a85c8b8c"
	I1124 14:21:41.277551  209505 cri.go:89] found id: "a4c6511774af615aa1669300a2c6d04bd89dfe944ed96314161c9a512760f916"
	I1124 14:21:41.277556  209505 cri.go:89] found id: "74e945fd0551cbdb4d26a6a893c168c98947e7a7d5498e5bd8a088068bacefc7"
	I1124 14:21:41.277559  209505 cri.go:89] found id: "ad4b785b35c8646bcdb429734f18a147ca4264fd37bc967141b4cdc9b42a59a0"
	I1124 14:21:41.277562  209505 cri.go:89] found id: ""
	I1124 14:21:41.277609  209505 ssh_runner.go:195] Run: sudo runc list -f json
	W1124 14:21:41.315942  209505 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T14:21:41Z" level=error msg="open /run/runc: no such file or directory"
	I1124 14:21:41.316039  209505 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 14:21:41.336871  209505 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1124 14:21:41.336905  209505 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1124 14:21:41.336962  209505 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1124 14:21:41.350977  209505 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1124 14:21:41.351556  209505 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-948249" does not appear in /home/jenkins/minikube-integration/21932-2805/kubeconfig
	I1124 14:21:41.351973  209505 kubeconfig.go:62] /home/jenkins/minikube-integration/21932-2805/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-948249" cluster setting kubeconfig missing "newest-cni-948249" context setting]
	I1124 14:21:41.352514  209505 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2805/kubeconfig: {Name:mk95d10d27091d631e85a5a3c35d5e4e38630871 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:21:41.354205  209505 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1124 14:21:41.367733  209505 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1124 14:21:41.367772  209505 kubeadm.go:602] duration metric: took 30.860152ms to restartPrimaryControlPlane
	I1124 14:21:41.367783  209505 kubeadm.go:403] duration metric: took 189.193178ms to StartCluster
	I1124 14:21:41.367799  209505 settings.go:142] acquiring lock: {Name:mk89c1ba43c874315f683e1eb3a8f5ff3817a931 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:21:41.367866  209505 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21932-2805/kubeconfig
	I1124 14:21:41.368777  209505 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2805/kubeconfig: {Name:mk95d10d27091d631e85a5a3c35d5e4e38630871 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:21:41.369021  209505 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 14:21:41.369402  209505 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 14:21:41.369489  209505 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-948249"
	I1124 14:21:41.369508  209505 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-948249"
	W1124 14:21:41.369514  209505 addons.go:248] addon storage-provisioner should already be in state true
	I1124 14:21:41.369539  209505 host.go:66] Checking if "newest-cni-948249" exists ...
	I1124 14:21:41.370046  209505 cli_runner.go:164] Run: docker container inspect newest-cni-948249 --format={{.State.Status}}
	I1124 14:21:41.370430  209505 config.go:182] Loaded profile config "newest-cni-948249": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 14:21:41.370510  209505 addons.go:70] Setting dashboard=true in profile "newest-cni-948249"
	I1124 14:21:41.370531  209505 addons.go:239] Setting addon dashboard=true in "newest-cni-948249"
	W1124 14:21:41.370539  209505 addons.go:248] addon dashboard should already be in state true
	I1124 14:21:41.370586  209505 host.go:66] Checking if "newest-cni-948249" exists ...
	I1124 14:21:41.371049  209505 cli_runner.go:164] Run: docker container inspect newest-cni-948249 --format={{.State.Status}}
	I1124 14:21:41.371404  209505 addons.go:70] Setting default-storageclass=true in profile "newest-cni-948249"
	I1124 14:21:41.371426  209505 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-948249"
	I1124 14:21:41.371704  209505 cli_runner.go:164] Run: docker container inspect newest-cni-948249 --format={{.State.Status}}
	I1124 14:21:41.377420  209505 out.go:179] * Verifying Kubernetes components...
	I1124 14:21:41.380746  209505 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 14:21:41.412294  209505 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 14:21:41.417172  209505 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 14:21:41.417198  209505 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 14:21:41.417261  209505 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-948249
	I1124 14:21:41.435586  209505 addons.go:239] Setting addon default-storageclass=true in "newest-cni-948249"
	W1124 14:21:41.435608  209505 addons.go:248] addon default-storageclass should already be in state true
	I1124 14:21:41.435633  209505 host.go:66] Checking if "newest-cni-948249" exists ...
	I1124 14:21:41.436063  209505 cli_runner.go:164] Run: docker container inspect newest-cni-948249 --format={{.State.Status}}
	I1124 14:21:41.450368  209505 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1124 14:21:41.453511  209505 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1124 14:21:41.456621  209505 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1124 14:21:41.456648  209505 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1124 14:21:41.456717  209505 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-948249
	I1124 14:21:41.472927  209505 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/newest-cni-948249/id_rsa Username:docker}
	I1124 14:21:41.487545  209505 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 14:21:41.487568  209505 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 14:21:41.487630  209505 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-948249
	I1124 14:21:41.510669  209505 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/newest-cni-948249/id_rsa Username:docker}
	I1124 14:21:41.525666  209505 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/newest-cni-948249/id_rsa Username:docker}
	I1124 14:21:41.852813  209505 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1124 14:21:41.852841  209505 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1124 14:21:41.868900  209505 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 14:21:41.953898  209505 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 14:21:41.962185  209505 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1124 14:21:41.962211  209505 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1124 14:21:41.977337  209505 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 14:21:42.017315  209505 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1124 14:21:42.017356  209505 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1124 14:21:42.090945  209505 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1124 14:21:42.090985  209505 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1124 14:21:42.157786  209505 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1124 14:21:42.157831  209505 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1124 14:21:42.218529  209505 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1124 14:21:42.218554  209505 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1124 14:21:42.256038  209505 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1124 14:21:42.256100  209505 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1124 14:21:42.285651  209505 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1124 14:21:42.285680  209505 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1124 14:21:42.306699  209505 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1124 14:21:42.306743  209505 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1124 14:21:42.336643  209505 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1124 14:21:47.296008  209505 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.427069521s)
	I1124 14:21:47.296072  209505 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (5.342149619s)
	I1124 14:21:47.296102  209505 api_server.go:52] waiting for apiserver process to appear ...
	I1124 14:21:47.296164  209505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 14:21:47.296228  209505 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.318868221s)
	I1124 14:21:47.365590  209505 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (5.028897171s)
	I1124 14:21:47.365816  209505 api_server.go:72] duration metric: took 5.996762488s to wait for apiserver process to appear ...
	I1124 14:21:47.365834  209505 api_server.go:88] waiting for apiserver healthz status ...
	I1124 14:21:47.365880  209505 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1124 14:21:47.368799  209505 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-948249 addons enable metrics-server
	
	I1124 14:21:47.371588  209505 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1124 14:21:47.374436  209505 addons.go:530] duration metric: took 6.005027001s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1124 14:21:47.377218  209505 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1124 14:21:47.377284  209505 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1124 14:21:47.866966  209505 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1124 14:21:47.883151  209505 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1124 14:21:47.884788  209505 api_server.go:141] control plane version: v1.34.1
	I1124 14:21:47.884849  209505 api_server.go:131] duration metric: took 519.006899ms to wait for apiserver health ...
	I1124 14:21:47.884875  209505 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 14:21:47.889526  209505 system_pods.go:59] 8 kube-system pods found
	I1124 14:21:47.889567  209505 system_pods.go:61] "coredns-66bc5c9577-6rv2z" [f569a6bf-bdcc-4176-8cb8-3bb68921e2da] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1124 14:21:47.889577  209505 system_pods.go:61] "etcd-newest-cni-948249" [963d5d58-180c-49d7-81e1-23b0a458bf9b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 14:21:47.889628  209505 system_pods.go:61] "kindnet-gtj2g" [e153411a-2f4b-4151-b83b-19611f170cfb] Running
	I1124 14:21:47.889638  209505 system_pods.go:61] "kube-apiserver-newest-cni-948249" [a30eecaf-1cbf-4072-a6aa-0c069801cc74] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1124 14:21:47.889649  209505 system_pods.go:61] "kube-controller-manager-newest-cni-948249" [edd42557-6e50-47eb-90fb-5d1bc56a8943] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 14:21:47.889655  209505 system_pods.go:61] "kube-proxy-tsnk9" [2cd4d95f-1e99-425c-948b-1ee004fea3ac] Running
	I1124 14:21:47.889661  209505 system_pods.go:61] "kube-scheduler-newest-cni-948249" [139bbe9e-626b-4937-a3a7-1929a3c43762] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 14:21:47.889696  209505 system_pods.go:61] "storage-provisioner" [c81e4590-cbb7-4278-bd3f-74f5be196395] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1124 14:21:47.889709  209505 system_pods.go:74] duration metric: took 4.81568ms to wait for pod list to return data ...
	I1124 14:21:47.889719  209505 default_sa.go:34] waiting for default service account to be created ...
	I1124 14:21:47.891967  209505 default_sa.go:45] found service account: "default"
	I1124 14:21:47.891992  209505 default_sa.go:55] duration metric: took 2.259109ms for default service account to be created ...
	I1124 14:21:47.892005  209505 kubeadm.go:587] duration metric: took 6.522951895s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1124 14:21:47.892025  209505 node_conditions.go:102] verifying NodePressure condition ...
	I1124 14:21:47.894085  209505 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1124 14:21:47.894156  209505 node_conditions.go:123] node cpu capacity is 2
	I1124 14:21:47.894182  209505 node_conditions.go:105] duration metric: took 2.147682ms to run NodePressure ...
	I1124 14:21:47.894203  209505 start.go:242] waiting for startup goroutines ...
	I1124 14:21:47.894224  209505 start.go:247] waiting for cluster config update ...
	I1124 14:21:47.894246  209505 start.go:256] writing updated cluster config ...
	I1124 14:21:47.894551  209505 ssh_runner.go:195] Run: rm -f paused
	I1124 14:21:47.959763  209505 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1124 14:21:47.963433  209505 out.go:179] * Done! kubectl is now configured to use "newest-cni-948249" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 24 14:21:46 newest-cni-948249 crio[615]: time="2025-11-24T14:21:46.124607402Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 14:21:46 newest-cni-948249 crio[615]: time="2025-11-24T14:21:46.131209496Z" level=info msg="Running pod sandbox: kube-system/kindnet-gtj2g/POD" id=4870c43a-6dbd-46a1-bd78-fecf2b08f1bb name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 24 14:21:46 newest-cni-948249 crio[615]: time="2025-11-24T14:21:46.131300648Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 14:21:46 newest-cni-948249 crio[615]: time="2025-11-24T14:21:46.141547212Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=cefa8f05-f10d-4805-8982-15faf44363ae name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 24 14:21:46 newest-cni-948249 crio[615]: time="2025-11-24T14:21:46.142932317Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=4870c43a-6dbd-46a1-bd78-fecf2b08f1bb name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 24 14:21:46 newest-cni-948249 crio[615]: time="2025-11-24T14:21:46.157148234Z" level=info msg="Ran pod sandbox 6b7797698f1732174d6581e838dd3b6f5f501476a63f68fc5933a4d13952da60 with infra container: kube-system/kindnet-gtj2g/POD" id=4870c43a-6dbd-46a1-bd78-fecf2b08f1bb name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 24 14:21:46 newest-cni-948249 crio[615]: time="2025-11-24T14:21:46.166147244Z" level=info msg="Ran pod sandbox c7b69b9a0af124f85295dc30fb04836531e971ac59fd952260995c9f69ab28f8 with infra container: kube-system/kube-proxy-tsnk9/POD" id=cefa8f05-f10d-4805-8982-15faf44363ae name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 24 14:21:46 newest-cni-948249 crio[615]: time="2025-11-24T14:21:46.169530221Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=590e7fa5-a6b1-4db8-8a00-ba2665c3257f name=/runtime.v1.ImageService/ImageStatus
	Nov 24 14:21:46 newest-cni-948249 crio[615]: time="2025-11-24T14:21:46.186302037Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=2a161b99-8e9f-4af4-a6f2-0571f85e2eb1 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 14:21:46 newest-cni-948249 crio[615]: time="2025-11-24T14:21:46.186898934Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=ebe60da9-b746-4c5f-8e92-8f61cae78498 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 14:21:46 newest-cni-948249 crio[615]: time="2025-11-24T14:21:46.188355975Z" level=info msg="Creating container: kube-system/kindnet-gtj2g/kindnet-cni" id=fa7001ce-3f4d-4fe8-9cc5-b23c5b8fa779 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 14:21:46 newest-cni-948249 crio[615]: time="2025-11-24T14:21:46.18858577Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 14:21:46 newest-cni-948249 crio[615]: time="2025-11-24T14:21:46.203914798Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 14:21:46 newest-cni-948249 crio[615]: time="2025-11-24T14:21:46.205098876Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 14:21:46 newest-cni-948249 crio[615]: time="2025-11-24T14:21:46.208586321Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=f5c53638-bb06-4cfb-92ab-5428a524d27b name=/runtime.v1.ImageService/ImageStatus
	Nov 24 14:21:46 newest-cni-948249 crio[615]: time="2025-11-24T14:21:46.235550998Z" level=info msg="Created container da5f54f6db301a368f859207f45ea734aa7736c9ded63a62f5af397e2d1affc4: kube-system/kindnet-gtj2g/kindnet-cni" id=fa7001ce-3f4d-4fe8-9cc5-b23c5b8fa779 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 14:21:46 newest-cni-948249 crio[615]: time="2025-11-24T14:21:46.236512829Z" level=info msg="Creating container: kube-system/kube-proxy-tsnk9/kube-proxy" id=f311ea7e-0448-4063-8952-1f84add9a0c3 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 14:21:46 newest-cni-948249 crio[615]: time="2025-11-24T14:21:46.236639641Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 14:21:46 newest-cni-948249 crio[615]: time="2025-11-24T14:21:46.245123593Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 14:21:46 newest-cni-948249 crio[615]: time="2025-11-24T14:21:46.245673515Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 14:21:46 newest-cni-948249 crio[615]: time="2025-11-24T14:21:46.249838013Z" level=info msg="Starting container: da5f54f6db301a368f859207f45ea734aa7736c9ded63a62f5af397e2d1affc4" id=602504bc-6911-416f-9dbd-70f4008b7076 name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 14:21:46 newest-cni-948249 crio[615]: time="2025-11-24T14:21:46.259728044Z" level=info msg="Started container" PID=1064 containerID=da5f54f6db301a368f859207f45ea734aa7736c9ded63a62f5af397e2d1affc4 description=kube-system/kindnet-gtj2g/kindnet-cni id=602504bc-6911-416f-9dbd-70f4008b7076 name=/runtime.v1.RuntimeService/StartContainer sandboxID=6b7797698f1732174d6581e838dd3b6f5f501476a63f68fc5933a4d13952da60
	Nov 24 14:21:46 newest-cni-948249 crio[615]: time="2025-11-24T14:21:46.524117395Z" level=info msg="Created container cb6d64e8ea20c49cf95a80ff6125d64c28760dad4ffb41835bacb5da24d441ad: kube-system/kube-proxy-tsnk9/kube-proxy" id=f311ea7e-0448-4063-8952-1f84add9a0c3 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 14:21:46 newest-cni-948249 crio[615]: time="2025-11-24T14:21:46.527978087Z" level=info msg="Starting container: cb6d64e8ea20c49cf95a80ff6125d64c28760dad4ffb41835bacb5da24d441ad" id=582ac2f2-e652-49ae-978a-5b10c1ac00b0 name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 14:21:46 newest-cni-948249 crio[615]: time="2025-11-24T14:21:46.531750023Z" level=info msg="Started container" PID=1075 containerID=cb6d64e8ea20c49cf95a80ff6125d64c28760dad4ffb41835bacb5da24d441ad description=kube-system/kube-proxy-tsnk9/kube-proxy id=582ac2f2-e652-49ae-978a-5b10c1ac00b0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c7b69b9a0af124f85295dc30fb04836531e971ac59fd952260995c9f69ab28f8
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	cb6d64e8ea20c       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   6 seconds ago       Running             kube-proxy                1                   c7b69b9a0af12       kube-proxy-tsnk9                            kube-system
	da5f54f6db301       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   7 seconds ago       Running             kindnet-cni               1                   6b7797698f173       kindnet-gtj2g                               kube-system
	7c2103ac3f27c       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   12 seconds ago      Running             kube-scheduler            1                   885dd9202e22b       kube-scheduler-newest-cni-948249            kube-system
	a4c6511774af6       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   12 seconds ago      Running             kube-apiserver            1                   ab87cb8e35a51       kube-apiserver-newest-cni-948249            kube-system
	74e945fd0551c       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   12 seconds ago      Running             etcd                      1                   51a5b7763663d       etcd-newest-cni-948249                      kube-system
	ad4b785b35c86       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   12 seconds ago      Running             kube-controller-manager   1                   9cffbe011ae7a       kube-controller-manager-newest-cni-948249   kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-948249
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-948249
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b5d1c9f4e75f4e638a533695fd62619949cefcab
	                    minikube.k8s.io/name=newest-cni-948249
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T14_21_22_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 14:21:18 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-948249
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 14:21:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 14:21:46 +0000   Mon, 24 Nov 2025 14:21:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 14:21:46 +0000   Mon, 24 Nov 2025 14:21:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 14:21:46 +0000   Mon, 24 Nov 2025 14:21:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Mon, 24 Nov 2025 14:21:46 +0000   Mon, 24 Nov 2025 14:21:13 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    newest-cni-948249
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 7283ea1857f18f20a875c29069214c9d
	  System UUID:                20c80147-87d0-4669-a827-37cbb2c6caf8
	  Boot ID:                    1b5f797b-5607-4a65-8de2-379783b7e272
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-948249                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         32s
	  kube-system                 kindnet-gtj2g                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      27s
	  kube-system                 kube-apiserver-newest-cni-948249             250m (12%)    0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-controller-manager-newest-cni-948249    200m (10%)    0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-proxy-tsnk9                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-scheduler-newest-cni-948249             100m (5%)     0 (0%)      0 (0%)           0 (0%)         32s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 25s                kube-proxy       
	  Normal   Starting                 5s                 kube-proxy       
	  Warning  CgroupV1                 40s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  40s (x8 over 40s)  kubelet          Node newest-cni-948249 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    40s (x8 over 40s)  kubelet          Node newest-cni-948249 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     40s (x8 over 40s)  kubelet          Node newest-cni-948249 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  32s                kubelet          Node newest-cni-948249 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 32s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    32s                kubelet          Node newest-cni-948249 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     32s                kubelet          Node newest-cni-948249 status is now: NodeHasSufficientPID
	  Normal   Starting                 32s                kubelet          Starting kubelet.
	  Normal   RegisteredNode           28s                node-controller  Node newest-cni-948249 event: Registered Node newest-cni-948249 in Controller
	  Normal   Starting                 14s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 14s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  13s (x8 over 14s)  kubelet          Node newest-cni-948249 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13s (x8 over 14s)  kubelet          Node newest-cni-948249 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13s (x8 over 14s)  kubelet          Node newest-cni-948249 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           4s                 node-controller  Node newest-cni-948249 event: Registered Node newest-cni-948249 in Controller
	
	
	==> dmesg <==
	[Nov24 13:58] overlayfs: idmapped layers are currently not supported
	[  +2.963383] overlayfs: idmapped layers are currently not supported
	[ +47.364934] overlayfs: idmapped layers are currently not supported
	[Nov24 13:59] overlayfs: idmapped layers are currently not supported
	[Nov24 14:00] overlayfs: idmapped layers are currently not supported
	[ +26.972375] overlayfs: idmapped layers are currently not supported
	[Nov24 14:02] overlayfs: idmapped layers are currently not supported
	[Nov24 14:03] overlayfs: idmapped layers are currently not supported
	[Nov24 14:05] overlayfs: idmapped layers are currently not supported
	[Nov24 14:07] overlayfs: idmapped layers are currently not supported
	[ +22.741489] overlayfs: idmapped layers are currently not supported
	[Nov24 14:11] overlayfs: idmapped layers are currently not supported
	[Nov24 14:13] overlayfs: idmapped layers are currently not supported
	[ +29.661409] overlayfs: idmapped layers are currently not supported
	[ +14.398898] overlayfs: idmapped layers are currently not supported
	[Nov24 14:14] overlayfs: idmapped layers are currently not supported
	[ +36.148198] overlayfs: idmapped layers are currently not supported
	[Nov24 14:16] overlayfs: idmapped layers are currently not supported
	[Nov24 14:17] overlayfs: idmapped layers are currently not supported
	[Nov24 14:18] overlayfs: idmapped layers are currently not supported
	[ +49.916713] overlayfs: idmapped layers are currently not supported
	[Nov24 14:19] overlayfs: idmapped layers are currently not supported
	[Nov24 14:20] overlayfs: idmapped layers are currently not supported
	[Nov24 14:21] overlayfs: idmapped layers are currently not supported
	[ +26.692408] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [74e945fd0551cbdb4d26a6a893c168c98947e7a7d5498e5bd8a088068bacefc7] <==
	{"level":"warn","ts":"2025-11-24T14:21:44.177429Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57712","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:21:44.200314Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57730","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:21:44.218035Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57754","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:21:44.242379Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57770","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:21:44.257127Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57800","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:21:44.271767Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57812","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:21:44.289180Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57820","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:21:44.307741Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57830","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:21:44.322914Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57872","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:21:44.339420Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57874","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:21:44.363882Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57900","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:21:44.380758Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57918","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:21:44.400993Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57940","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:21:44.419178Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57954","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:21:44.437380Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57974","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:21:44.454925Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57986","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:21:44.476782Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58008","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:21:44.494637Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58024","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:21:44.547596Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58032","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:21:44.562768Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58048","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:21:44.575643Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58054","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:21:44.609111Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58068","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:21:44.620496Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58082","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:21:44.640460Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58100","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:21:44.696213Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58132","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 14:21:53 up  2:04,  0 user,  load average: 2.83, 2.86, 2.53
	Linux newest-cni-948249 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [da5f54f6db301a368f859207f45ea734aa7736c9ded63a62f5af397e2d1affc4] <==
	I1124 14:21:46.342792       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1124 14:21:46.343075       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1124 14:21:46.343275       1 main.go:148] setting mtu 1500 for CNI 
	I1124 14:21:46.343292       1 main.go:178] kindnetd IP family: "ipv4"
	I1124 14:21:46.343303       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-24T14:21:46Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1124 14:21:46.561219       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1124 14:21:46.581591       1 controller.go:381] "Waiting for informer caches to sync"
	I1124 14:21:46.581698       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1124 14:21:46.581845       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [a4c6511774af615aa1669300a2c6d04bd89dfe944ed96314161c9a512760f916] <==
	I1124 14:21:45.853502       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1124 14:21:45.868186       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1124 14:21:45.868245       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1124 14:21:45.894043       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1124 14:21:45.894165       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1124 14:21:45.906311       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1124 14:21:45.932206       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1124 14:21:45.932307       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1124 14:21:45.932323       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1124 14:21:45.932583       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1124 14:21:45.932636       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1124 14:21:45.933330       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1124 14:21:45.959101       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	E1124 14:21:46.012165       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1124 14:21:46.437170       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1124 14:21:46.787949       1 controller.go:667] quota admission added evaluator for: namespaces
	I1124 14:21:46.969022       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1124 14:21:47.111139       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1124 14:21:47.165167       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1124 14:21:47.330367       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.110.146.244"}
	I1124 14:21:47.358916       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.96.184.44"}
	I1124 14:21:49.528020       1 controller.go:667] quota admission added evaluator for: endpoints
	I1124 14:21:49.578058       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1124 14:21:49.633136       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1124 14:21:49.701326       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	
	
	==> kube-controller-manager [ad4b785b35c8646bcdb429734f18a147ca4264fd37bc967141b4cdc9b42a59a0] <==
	I1124 14:21:49.022510       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1124 14:21:49.022621       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-948249"
	I1124 14:21:49.022700       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1124 14:21:49.023977       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1124 14:21:49.022282       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1124 14:21:49.025849       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1124 14:21:49.046727       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1124 14:21:49.055783       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1124 14:21:49.055864       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1124 14:21:49.059648       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1124 14:21:49.067339       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1124 14:21:49.070013       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1124 14:21:49.071606       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1124 14:21:49.071690       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1124 14:21:49.071777       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1124 14:21:49.071815       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1124 14:21:49.071940       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1124 14:21:49.071981       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1124 14:21:49.075263       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1124 14:21:49.091498       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 14:21:49.091593       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1124 14:21:49.123915       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 14:21:49.123946       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1124 14:21:49.123954       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1124 14:21:49.127790       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [cb6d64e8ea20c49cf95a80ff6125d64c28760dad4ffb41835bacb5da24d441ad] <==
	I1124 14:21:47.349287       1 server_linux.go:53] "Using iptables proxy"
	I1124 14:21:47.497378       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1124 14:21:47.602583       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1124 14:21:47.602630       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1124 14:21:47.602708       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 14:21:47.619978       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 14:21:47.620031       1 server_linux.go:132] "Using iptables Proxier"
	I1124 14:21:47.624280       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 14:21:47.624575       1 server.go:527] "Version info" version="v1.34.1"
	I1124 14:21:47.624593       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 14:21:47.626092       1 config.go:200] "Starting service config controller"
	I1124 14:21:47.626158       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 14:21:47.626283       1 config.go:106] "Starting endpoint slice config controller"
	I1124 14:21:47.627175       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 14:21:47.626302       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 14:21:47.627188       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1124 14:21:47.626949       1 config.go:309] "Starting node config controller"
	I1124 14:21:47.627198       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 14:21:47.627202       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1124 14:21:47.728145       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1124 14:21:47.728248       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1124 14:21:47.728290       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [7c2103ac3f27c474a4868f8f4a6aad887ec147e5c1701cdeb90f84a5a85c8b8c] <==
	I1124 14:21:44.778451       1 serving.go:386] Generated self-signed cert in-memory
	I1124 14:21:47.547198       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1124 14:21:47.547231       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 14:21:47.552508       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1124 14:21:47.552543       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1124 14:21:47.552622       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1124 14:21:47.552734       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1124 14:21:47.552968       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 14:21:47.552990       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 14:21:47.553007       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1124 14:21:47.553014       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1124 14:21:47.653709       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1124 14:21:47.653846       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 14:21:47.654539       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	
	
	==> kubelet <==
	Nov 24 14:21:42 newest-cni-948249 kubelet[735]: E1124 14:21:42.044023     735 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-948249\" not found" node="newest-cni-948249"
	Nov 24 14:21:44 newest-cni-948249 kubelet[735]: E1124 14:21:44.039602     735 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-948249\" not found" node="newest-cni-948249"
	Nov 24 14:21:45 newest-cni-948249 kubelet[735]: I1124 14:21:45.693987     735 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-948249"
	Nov 24 14:21:45 newest-cni-948249 kubelet[735]: I1124 14:21:45.778278     735 apiserver.go:52] "Watching apiserver"
	Nov 24 14:21:45 newest-cni-948249 kubelet[735]: I1124 14:21:45.898158     735 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 24 14:21:45 newest-cni-948249 kubelet[735]: I1124 14:21:45.938628     735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2cd4d95f-1e99-425c-948b-1ee004fea3ac-xtables-lock\") pod \"kube-proxy-tsnk9\" (UID: \"2cd4d95f-1e99-425c-948b-1ee004fea3ac\") " pod="kube-system/kube-proxy-tsnk9"
	Nov 24 14:21:45 newest-cni-948249 kubelet[735]: I1124 14:21:45.938686     735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e153411a-2f4b-4151-b83b-19611f170cfb-lib-modules\") pod \"kindnet-gtj2g\" (UID: \"e153411a-2f4b-4151-b83b-19611f170cfb\") " pod="kube-system/kindnet-gtj2g"
	Nov 24 14:21:45 newest-cni-948249 kubelet[735]: I1124 14:21:45.938729     735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e153411a-2f4b-4151-b83b-19611f170cfb-xtables-lock\") pod \"kindnet-gtj2g\" (UID: \"e153411a-2f4b-4151-b83b-19611f170cfb\") " pod="kube-system/kindnet-gtj2g"
	Nov 24 14:21:45 newest-cni-948249 kubelet[735]: I1124 14:21:45.938759     735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2cd4d95f-1e99-425c-948b-1ee004fea3ac-lib-modules\") pod \"kube-proxy-tsnk9\" (UID: \"2cd4d95f-1e99-425c-948b-1ee004fea3ac\") " pod="kube-system/kube-proxy-tsnk9"
	Nov 24 14:21:45 newest-cni-948249 kubelet[735]: I1124 14:21:45.938780     735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/e153411a-2f4b-4151-b83b-19611f170cfb-cni-cfg\") pod \"kindnet-gtj2g\" (UID: \"e153411a-2f4b-4151-b83b-19611f170cfb\") " pod="kube-system/kindnet-gtj2g"
	Nov 24 14:21:45 newest-cni-948249 kubelet[735]: E1124 14:21:45.987291     735 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-948249\" already exists" pod="kube-system/etcd-newest-cni-948249"
	Nov 24 14:21:45 newest-cni-948249 kubelet[735]: I1124 14:21:45.987745     735 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-948249"
	Nov 24 14:21:45 newest-cni-948249 kubelet[735]: I1124 14:21:45.987706     735 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 24 14:21:46 newest-cni-948249 kubelet[735]: I1124 14:21:46.013410     735 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-948249"
	Nov 24 14:21:46 newest-cni-948249 kubelet[735]: I1124 14:21:46.013525     735 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-948249"
	Nov 24 14:21:46 newest-cni-948249 kubelet[735]: I1124 14:21:46.013559     735 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Nov 24 14:21:46 newest-cni-948249 kubelet[735]: I1124 14:21:46.019327     735 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Nov 24 14:21:46 newest-cni-948249 kubelet[735]: E1124 14:21:46.047974     735 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-948249\" already exists" pod="kube-system/kube-apiserver-newest-cni-948249"
	Nov 24 14:21:46 newest-cni-948249 kubelet[735]: I1124 14:21:46.048018     735 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-948249"
	Nov 24 14:21:46 newest-cni-948249 kubelet[735]: E1124 14:21:46.101706     735 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-948249\" already exists" pod="kube-system/kube-controller-manager-newest-cni-948249"
	Nov 24 14:21:46 newest-cni-948249 kubelet[735]: I1124 14:21:46.101748     735 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-948249"
	Nov 24 14:21:46 newest-cni-948249 kubelet[735]: E1124 14:21:46.128206     735 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-948249\" already exists" pod="kube-system/kube-scheduler-newest-cni-948249"
	Nov 24 14:21:49 newest-cni-948249 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 24 14:21:49 newest-cni-948249 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 24 14:21:49 newest-cni-948249 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
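Note: the dump above catches the cluster mid-restart. /healthz briefly returned 500 while poststarthook/rbac/bootstrap-roles completed, and the node still carried the node.kubernetes.io/not-ready:NoSchedule taint, which is why coredns-66bc5c9577-6rv2z and storage-provisioner were reported Unschedulable. Both conditions can be re-checked by hand against a cluster in this state (a sketch using the profile name from this log, not part of the test run):

	kubectl --context newest-cni-948249 get --raw='/healthz?verbose'
	kubectl --context newest-cni-948249 get node newest-cni-948249 -o jsonpath='{.spec.taints}'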
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-948249 -n newest-cni-948249
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-948249 -n newest-cni-948249: exit status 2 (431.600551ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-948249 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-6rv2z storage-provisioner dashboard-metrics-scraper-6ffb444bf9-5txnj kubernetes-dashboard-855c9754f9-qd7pt
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-948249 describe pod coredns-66bc5c9577-6rv2z storage-provisioner dashboard-metrics-scraper-6ffb444bf9-5txnj kubernetes-dashboard-855c9754f9-qd7pt
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-948249 describe pod coredns-66bc5c9577-6rv2z storage-provisioner dashboard-metrics-scraper-6ffb444bf9-5txnj kubernetes-dashboard-855c9754f9-qd7pt: exit status 1 (124.711368ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-6rv2z" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-5txnj" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-qd7pt" not found

** /stderr **
helpers_test.go:287: kubectl --context newest-cni-948249 describe pod coredns-66bc5c9577-6rv2z storage-provisioner dashboard-metrics-scraper-6ffb444bf9-5txnj kubernetes-dashboard-855c9754f9-qd7pt: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (5.89s)
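Note: the failing step is a plain CLI invocation, so it can be retried manually with the same flags the test passes (a sketch; compare the analogous default-k8s-diff-port run below):

	out/minikube-linux-arm64 pause -p newest-cni-948249 --alsologtostderr -v=1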

x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (7.14s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-152851 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p default-k8s-diff-port-152851 --alsologtostderr -v=1: exit status 80 (2.490202385s)

-- stdout --
	* Pausing node default-k8s-diff-port-152851 ... 
	
	

-- /stdout --
** stderr ** 
	I1124 14:23:04.950517  217954 out.go:360] Setting OutFile to fd 1 ...
	I1124 14:23:04.950731  217954 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 14:23:04.950763  217954 out.go:374] Setting ErrFile to fd 2...
	I1124 14:23:04.950785  217954 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 14:23:04.951062  217954 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-2805/.minikube/bin
	I1124 14:23:04.951459  217954 out.go:368] Setting JSON to false
	I1124 14:23:04.951513  217954 mustload.go:66] Loading cluster: default-k8s-diff-port-152851
	I1124 14:23:04.951977  217954 config.go:182] Loaded profile config "default-k8s-diff-port-152851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 14:23:04.952478  217954 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-152851 --format={{.State.Status}}
	I1124 14:23:04.970138  217954 host.go:66] Checking if "default-k8s-diff-port-152851" exists ...
	I1124 14:23:04.970448  217954 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 14:23:05.042412  217954 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-24 14:23:05.031993602 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 14:23:05.043179  217954 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763503576-21924/minikube-v1.37.0-1763503576-21924-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763503576-21924-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:default-k8s-diff-port-152851 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1124 14:23:05.046850  217954 out.go:179] * Pausing node default-k8s-diff-port-152851 ... 
	I1124 14:23:05.050068  217954 host.go:66] Checking if "default-k8s-diff-port-152851" exists ...
	I1124 14:23:05.050428  217954 ssh_runner.go:195] Run: systemctl --version
	I1124 14:23:05.050479  217954 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-152851
	I1124 14:23:05.068674  217954 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/default-k8s-diff-port-152851/id_rsa Username:docker}
	I1124 14:23:05.174464  217954 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 14:23:05.190882  217954 pause.go:52] kubelet running: true
	I1124 14:23:05.190963  217954 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1124 14:23:05.482812  217954 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1124 14:23:05.482900  217954 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1124 14:23:05.559772  217954 cri.go:89] found id: "8c79dacfe2ce221fbea9a2486ec69afabc6d46c3c4d82441e90690036efd9d52"
	I1124 14:23:05.559797  217954 cri.go:89] found id: "1c698135a263826ddf532798c50e5a822a0a3c1879d5551637c0335e965e578a"
	I1124 14:23:05.559803  217954 cri.go:89] found id: "a1ed8677ffbfcde28812b6af270ab182d73971a25480fded9fd523b9027de0fb"
	I1124 14:23:05.559806  217954 cri.go:89] found id: "39188bea6a564382a6980903b479f266ffccf33b661be945f494d30c4d35a2a1"
	I1124 14:23:05.559809  217954 cri.go:89] found id: "2989f7e752e954727b85260ced94cba345baeb3ca207485296c9174eb09dfd54"
	I1124 14:23:05.559813  217954 cri.go:89] found id: "0c3cb362e4d9189052f992f04fe50fac0c17ff2bd5f72ef4be40e433331ba291"
	I1124 14:23:05.559816  217954 cri.go:89] found id: "79537370e6485bd82564920e391bc4bdfa906e6f8c7d96a71aac5f90ea93fca2"
	I1124 14:23:05.559819  217954 cri.go:89] found id: "1ab258e8f64fcb7b1fb7769531303c2b402470d9e0ae16ffe932a98857f4fd05"
	I1124 14:23:05.559821  217954 cri.go:89] found id: "6f961bf32124218ddecf96482422e8b5741e2a3a4f6241f531f98636f588acab"
	I1124 14:23:05.559828  217954 cri.go:89] found id: "c22e573f34aa0e2dc77ca244518275a75fe9d0fc6dea3e2b5672083318574366"
	I1124 14:23:05.559831  217954 cri.go:89] found id: "562bb15e55c2c7efd782aafdd77f42b761f21c9e5ed723b3a3473ff620847201"
	I1124 14:23:05.559835  217954 cri.go:89] found id: ""
	I1124 14:23:05.559882  217954 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 14:23:05.572264  217954 retry.go:31] will retry after 167.515351ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T14:23:05Z" level=error msg="open /run/runc: no such file or directory"
	I1124 14:23:05.740616  217954 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 14:23:05.753880  217954 pause.go:52] kubelet running: false
	I1124 14:23:05.753955  217954 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1124 14:23:05.962796  217954 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1124 14:23:05.962892  217954 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1124 14:23:06.055587  217954 cri.go:89] found id: "8c79dacfe2ce221fbea9a2486ec69afabc6d46c3c4d82441e90690036efd9d52"
	I1124 14:23:06.055653  217954 cri.go:89] found id: "1c698135a263826ddf532798c50e5a822a0a3c1879d5551637c0335e965e578a"
	I1124 14:23:06.055673  217954 cri.go:89] found id: "a1ed8677ffbfcde28812b6af270ab182d73971a25480fded9fd523b9027de0fb"
	I1124 14:23:06.055696  217954 cri.go:89] found id: "39188bea6a564382a6980903b479f266ffccf33b661be945f494d30c4d35a2a1"
	I1124 14:23:06.055737  217954 cri.go:89] found id: "2989f7e752e954727b85260ced94cba345baeb3ca207485296c9174eb09dfd54"
	I1124 14:23:06.055770  217954 cri.go:89] found id: "0c3cb362e4d9189052f992f04fe50fac0c17ff2bd5f72ef4be40e433331ba291"
	I1124 14:23:06.055804  217954 cri.go:89] found id: "79537370e6485bd82564920e391bc4bdfa906e6f8c7d96a71aac5f90ea93fca2"
	I1124 14:23:06.055830  217954 cri.go:89] found id: "1ab258e8f64fcb7b1fb7769531303c2b402470d9e0ae16ffe932a98857f4fd05"
	I1124 14:23:06.055859  217954 cri.go:89] found id: "6f961bf32124218ddecf96482422e8b5741e2a3a4f6241f531f98636f588acab"
	I1124 14:23:06.055889  217954 cri.go:89] found id: "c22e573f34aa0e2dc77ca244518275a75fe9d0fc6dea3e2b5672083318574366"
	I1124 14:23:06.055908  217954 cri.go:89] found id: "562bb15e55c2c7efd782aafdd77f42b761f21c9e5ed723b3a3473ff620847201"
	I1124 14:23:06.055941  217954 cri.go:89] found id: ""
	I1124 14:23:06.056018  217954 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 14:23:06.068351  217954 retry.go:31] will retry after 200.021522ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T14:23:06Z" level=error msg="open /run/runc: no such file or directory"
	I1124 14:23:06.268819  217954 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 14:23:06.283780  217954 pause.go:52] kubelet running: false
	I1124 14:23:06.283901  217954 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1124 14:23:06.469812  217954 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1124 14:23:06.469944  217954 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1124 14:23:06.537696  217954 cri.go:89] found id: "8c79dacfe2ce221fbea9a2486ec69afabc6d46c3c4d82441e90690036efd9d52"
	I1124 14:23:06.537719  217954 cri.go:89] found id: "1c698135a263826ddf532798c50e5a822a0a3c1879d5551637c0335e965e578a"
	I1124 14:23:06.537725  217954 cri.go:89] found id: "a1ed8677ffbfcde28812b6af270ab182d73971a25480fded9fd523b9027de0fb"
	I1124 14:23:06.537728  217954 cri.go:89] found id: "39188bea6a564382a6980903b479f266ffccf33b661be945f494d30c4d35a2a1"
	I1124 14:23:06.537732  217954 cri.go:89] found id: "2989f7e752e954727b85260ced94cba345baeb3ca207485296c9174eb09dfd54"
	I1124 14:23:06.537771  217954 cri.go:89] found id: "0c3cb362e4d9189052f992f04fe50fac0c17ff2bd5f72ef4be40e433331ba291"
	I1124 14:23:06.537783  217954 cri.go:89] found id: "79537370e6485bd82564920e391bc4bdfa906e6f8c7d96a71aac5f90ea93fca2"
	I1124 14:23:06.537786  217954 cri.go:89] found id: "1ab258e8f64fcb7b1fb7769531303c2b402470d9e0ae16ffe932a98857f4fd05"
	I1124 14:23:06.537789  217954 cri.go:89] found id: "6f961bf32124218ddecf96482422e8b5741e2a3a4f6241f531f98636f588acab"
	I1124 14:23:06.537797  217954 cri.go:89] found id: "c22e573f34aa0e2dc77ca244518275a75fe9d0fc6dea3e2b5672083318574366"
	I1124 14:23:06.537806  217954 cri.go:89] found id: "562bb15e55c2c7efd782aafdd77f42b761f21c9e5ed723b3a3473ff620847201"
	I1124 14:23:06.537809  217954 cri.go:89] found id: ""
	I1124 14:23:06.537876  217954 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 14:23:06.549269  217954 retry.go:31] will retry after 531.816746ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T14:23:06Z" level=error msg="open /run/runc: no such file or directory"
	I1124 14:23:07.082060  217954 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 14:23:07.095248  217954 pause.go:52] kubelet running: false
	I1124 14:23:07.095317  217954 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1124 14:23:07.282886  217954 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1124 14:23:07.282997  217954 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1124 14:23:07.357600  217954 cri.go:89] found id: "8c79dacfe2ce221fbea9a2486ec69afabc6d46c3c4d82441e90690036efd9d52"
	I1124 14:23:07.357618  217954 cri.go:89] found id: "1c698135a263826ddf532798c50e5a822a0a3c1879d5551637c0335e965e578a"
	I1124 14:23:07.357624  217954 cri.go:89] found id: "a1ed8677ffbfcde28812b6af270ab182d73971a25480fded9fd523b9027de0fb"
	I1124 14:23:07.357628  217954 cri.go:89] found id: "39188bea6a564382a6980903b479f266ffccf33b661be945f494d30c4d35a2a1"
	I1124 14:23:07.357631  217954 cri.go:89] found id: "2989f7e752e954727b85260ced94cba345baeb3ca207485296c9174eb09dfd54"
	I1124 14:23:07.357634  217954 cri.go:89] found id: "0c3cb362e4d9189052f992f04fe50fac0c17ff2bd5f72ef4be40e433331ba291"
	I1124 14:23:07.357638  217954 cri.go:89] found id: "79537370e6485bd82564920e391bc4bdfa906e6f8c7d96a71aac5f90ea93fca2"
	I1124 14:23:07.357641  217954 cri.go:89] found id: "1ab258e8f64fcb7b1fb7769531303c2b402470d9e0ae16ffe932a98857f4fd05"
	I1124 14:23:07.357644  217954 cri.go:89] found id: "6f961bf32124218ddecf96482422e8b5741e2a3a4f6241f531f98636f588acab"
	I1124 14:23:07.357650  217954 cri.go:89] found id: "c22e573f34aa0e2dc77ca244518275a75fe9d0fc6dea3e2b5672083318574366"
	I1124 14:23:07.357654  217954 cri.go:89] found id: "562bb15e55c2c7efd782aafdd77f42b761f21c9e5ed723b3a3473ff620847201"
	I1124 14:23:07.357657  217954 cri.go:89] found id: ""
	I1124 14:23:07.357734  217954 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 14:23:07.373588  217954 out.go:203] 
	W1124 14:23:07.376658  217954 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T14:23:07Z" level=error msg="open /run/runc: no such file or directory"
	
	W1124 14:23:07.376684  217954 out.go:285] * 
	W1124 14:23:07.382277  217954 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1124 14:23:07.385217  217954 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p default-k8s-diff-port-152851 --alsologtostderr -v=1 failed: exit status 80
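The failure mode is identical across all four attempts in the stderr log: crictl enumerates the kube-system containers, but the pause path then shells out to "sudo runc list -f json", which exits 1 because /run/runc is missing on this CRI-O node. A minimal reproduction sketch against the still-running profile, reusing the exact commands from the log above ("minikube ssh" simply runs them on the node):

	# should print the same container IDs the pause path found
	out/minikube-linux-arm64 -p default-k8s-diff-port-152851 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	# should fail with "open /run/runc: no such file or directory", matching the GUEST_PAUSE error
	out/minikube-linux-arm64 -p default-k8s-diff-port-152851 ssh -- sudo runc list -f json

When the second command fails while the first succeeds, the runtime state root the pause code assumes does not match the one CRI-O is actually using.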
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
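The same host-side snapshot can be reproduced with a one-liner (a sketch; here it would print only the fallback message, since all three variables are empty):

	env | grep -iE '^(http_proxy|https_proxy|no_proxy)=' || echo "no proxy variables set"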
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-152851
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-152851:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "bb380e4fa749c80f5c1b19c95fcad1ed1f835691b8483c3ebe22f8e66973175a",
	        "Created": "2025-11-24T14:20:10.30310035Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 213119,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T14:21:54.577260132Z",
	            "FinishedAt": "2025-11-24T14:21:53.416493117Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/bb380e4fa749c80f5c1b19c95fcad1ed1f835691b8483c3ebe22f8e66973175a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/bb380e4fa749c80f5c1b19c95fcad1ed1f835691b8483c3ebe22f8e66973175a/hostname",
	        "HostsPath": "/var/lib/docker/containers/bb380e4fa749c80f5c1b19c95fcad1ed1f835691b8483c3ebe22f8e66973175a/hosts",
	        "LogPath": "/var/lib/docker/containers/bb380e4fa749c80f5c1b19c95fcad1ed1f835691b8483c3ebe22f8e66973175a/bb380e4fa749c80f5c1b19c95fcad1ed1f835691b8483c3ebe22f8e66973175a-json.log",
	        "Name": "/default-k8s-diff-port-152851",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "default-k8s-diff-port-152851:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-152851",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "bb380e4fa749c80f5c1b19c95fcad1ed1f835691b8483c3ebe22f8e66973175a",
	                "LowerDir": "/var/lib/docker/overlay2/47ed7fba18527758b18563a9035241518dd071ecf5409d539ee7b229eae0305b-init/diff:/var/lib/docker/overlay2/13a44a1c9c7389f495d930a01834ff28273a0e5eb2fe3411fc4db3ff0709690d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/47ed7fba18527758b18563a9035241518dd071ecf5409d539ee7b229eae0305b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/47ed7fba18527758b18563a9035241518dd071ecf5409d539ee7b229eae0305b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/47ed7fba18527758b18563a9035241518dd071ecf5409d539ee7b229eae0305b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-152851",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-152851/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-152851",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-152851",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-152851",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ba22ad00af0daff556271f44a387697422a1062f01305be1cf688a6e70e3cdb2",
	            "SandboxKey": "/var/run/docker/netns/ba22ad00af0d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33093"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33094"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33097"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33095"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33096"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-152851": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "92:e6:ab:e5:98:ed",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "13603eff9881a10c42cb9841bf658813f5fbc60eabf578cb82466b4c09374f11",
	                    "EndpointID": "7bc90bd8c48475fd9c3d578ddf891b9e4adc87e851032e6b3a5bc7a50d5924f2",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-152851",
	                        "bb380e4fa749"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
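The SSH endpoint the harness dialed (127.0.0.1:33093 in the stderr log) comes straight from the NetworkSettings.Ports block above; the same Go template the test runner uses can extract it by hand:

	# prints 33093 for the inspect output shown above
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' default-k8s-diff-port-152851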
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-152851 -n default-k8s-diff-port-152851
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-152851 -n default-k8s-diff-port-152851: exit status 2 (423.573171ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
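The "may be ok" is plausible here: {{.Host}} reports Running, and the nonzero exit likely reflects the kubelet that the pause attempts disabled rather than the container itself. A quick cross-check straight from Docker, using the State fields visible in the inspect output above (for this container it prints "running false", i.e. the node is up and was never actually paused):

	docker inspect -f '{{.State.Status}} {{.State.Paused}}' default-k8s-diff-port-152851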
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-152851 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-152851 logs -n 25: (1.352755456s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ pause   │ -p no-preload-444317 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-444317            │ jenkins │ v1.37.0 │ 24 Nov 25 14:19 UTC │                     │
	│ delete  │ -p no-preload-444317                                                                                                                                                                                                                          │ no-preload-444317            │ jenkins │ v1.37.0 │ 24 Nov 25 14:20 UTC │ 24 Nov 25 14:20 UTC │
	│ delete  │ -p no-preload-444317                                                                                                                                                                                                                          │ no-preload-444317            │ jenkins │ v1.37.0 │ 24 Nov 25 14:20 UTC │ 24 Nov 25 14:20 UTC │
	│ delete  │ -p disable-driver-mounts-799392                                                                                                                                                                                                               │ disable-driver-mounts-799392 │ jenkins │ v1.37.0 │ 24 Nov 25 14:20 UTC │ 24 Nov 25 14:20 UTC │
	│ start   │ -p default-k8s-diff-port-152851 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-152851 │ jenkins │ v1.37.0 │ 24 Nov 25 14:20 UTC │ 24 Nov 25 14:21 UTC │
	│ image   │ embed-certs-720293 image list --format=json                                                                                                                                                                                                   │ embed-certs-720293           │ jenkins │ v1.37.0 │ 24 Nov 25 14:20 UTC │ 24 Nov 25 14:20 UTC │
	│ pause   │ -p embed-certs-720293 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-720293           │ jenkins │ v1.37.0 │ 24 Nov 25 14:20 UTC │                     │
	│ delete  │ -p embed-certs-720293                                                                                                                                                                                                                         │ embed-certs-720293           │ jenkins │ v1.37.0 │ 24 Nov 25 14:20 UTC │ 24 Nov 25 14:20 UTC │
	│ delete  │ -p embed-certs-720293                                                                                                                                                                                                                         │ embed-certs-720293           │ jenkins │ v1.37.0 │ 24 Nov 25 14:20 UTC │ 24 Nov 25 14:20 UTC │
	│ start   │ -p newest-cni-948249 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-948249            │ jenkins │ v1.37.0 │ 24 Nov 25 14:20 UTC │ 24 Nov 25 14:21 UTC │
	│ addons  │ enable metrics-server -p newest-cni-948249 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-948249            │ jenkins │ v1.37.0 │ 24 Nov 25 14:21 UTC │                     │
	│ stop    │ -p newest-cni-948249 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-948249            │ jenkins │ v1.37.0 │ 24 Nov 25 14:21 UTC │ 24 Nov 25 14:21 UTC │
	│ addons  │ enable dashboard -p newest-cni-948249 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-948249            │ jenkins │ v1.37.0 │ 24 Nov 25 14:21 UTC │ 24 Nov 25 14:21 UTC │
	│ start   │ -p newest-cni-948249 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-948249            │ jenkins │ v1.37.0 │ 24 Nov 25 14:21 UTC │ 24 Nov 25 14:21 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-152851 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-152851 │ jenkins │ v1.37.0 │ 24 Nov 25 14:21 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-152851 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-152851 │ jenkins │ v1.37.0 │ 24 Nov 25 14:21 UTC │ 24 Nov 25 14:21 UTC │
	│ image   │ newest-cni-948249 image list --format=json                                                                                                                                                                                                    │ newest-cni-948249            │ jenkins │ v1.37.0 │ 24 Nov 25 14:21 UTC │ 24 Nov 25 14:21 UTC │
	│ pause   │ -p newest-cni-948249 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-948249            │ jenkins │ v1.37.0 │ 24 Nov 25 14:21 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-152851 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-152851 │ jenkins │ v1.37.0 │ 24 Nov 25 14:21 UTC │ 24 Nov 25 14:21 UTC │
	│ start   │ -p default-k8s-diff-port-152851 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-152851 │ jenkins │ v1.37.0 │ 24 Nov 25 14:21 UTC │ 24 Nov 25 14:22 UTC │
	│ delete  │ -p newest-cni-948249                                                                                                                                                                                                                          │ newest-cni-948249            │ jenkins │ v1.37.0 │ 24 Nov 25 14:21 UTC │ 24 Nov 25 14:21 UTC │
	│ delete  │ -p newest-cni-948249                                                                                                                                                                                                                          │ newest-cni-948249            │ jenkins │ v1.37.0 │ 24 Nov 25 14:21 UTC │ 24 Nov 25 14:21 UTC │
	│ start   │ -p auto-626991 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-626991                  │ jenkins │ v1.37.0 │ 24 Nov 25 14:21 UTC │                     │
	│ image   │ default-k8s-diff-port-152851 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-152851 │ jenkins │ v1.37.0 │ 24 Nov 25 14:23 UTC │ 24 Nov 25 14:23 UTC │
	│ pause   │ -p default-k8s-diff-port-152851 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-152851 │ jenkins │ v1.37.0 │ 24 Nov 25 14:23 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 14:21:57
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 14:21:57.112296  213874 out.go:360] Setting OutFile to fd 1 ...
	I1124 14:21:57.112478  213874 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 14:21:57.112491  213874 out.go:374] Setting ErrFile to fd 2...
	I1124 14:21:57.112498  213874 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 14:21:57.112811  213874 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-2805/.minikube/bin
	I1124 14:21:57.113303  213874 out.go:368] Setting JSON to false
	I1124 14:21:57.114245  213874 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":7469,"bootTime":1763986649,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1124 14:21:57.114316  213874 start.go:143] virtualization:  
	I1124 14:21:57.118107  213874 out.go:179] * [auto-626991] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1124 14:21:57.121444  213874 out.go:179]   - MINIKUBE_LOCATION=21932
	I1124 14:21:57.121507  213874 notify.go:221] Checking for updates...
	I1124 14:21:57.127576  213874 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 14:21:57.130724  213874 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21932-2805/kubeconfig
	I1124 14:21:57.133783  213874 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21932-2805/.minikube
	I1124 14:21:57.136708  213874 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1124 14:21:57.139549  213874 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 14:21:57.143065  213874 config.go:182] Loaded profile config "default-k8s-diff-port-152851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 14:21:57.143197  213874 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 14:21:57.168024  213874 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1124 14:21:57.168155  213874 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 14:21:57.232809  213874 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-24 14:21:57.223266812 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 14:21:57.232915  213874 docker.go:319] overlay module found
	I1124 14:21:57.236224  213874 out.go:179] * Using the docker driver based on user configuration
	I1124 14:21:57.239181  213874 start.go:309] selected driver: docker
	I1124 14:21:57.239202  213874 start.go:927] validating driver "docker" against <nil>
	I1124 14:21:57.239216  213874 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 14:21:57.240061  213874 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 14:21:57.290589  213874 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-24 14:21:57.281138441 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 14:21:57.290757  213874 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1124 14:21:57.291016  213874 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 14:21:57.293889  213874 out.go:179] * Using Docker driver with root privileges
	I1124 14:21:57.296782  213874 cni.go:84] Creating CNI manager for ""
	I1124 14:21:57.296863  213874 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 14:21:57.296877  213874 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1124 14:21:57.296954  213874 start.go:353] cluster config:
	{Name:auto-626991 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-626991 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 14:21:57.300233  213874 out.go:179] * Starting "auto-626991" primary control-plane node in "auto-626991" cluster
	I1124 14:21:57.302941  213874 cache.go:134] Beginning downloading kic base image for docker with crio
	I1124 14:21:57.305898  213874 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1124 14:21:57.308820  213874 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 14:21:57.308867  213874 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21932-2805/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1124 14:21:57.308878  213874 cache.go:65] Caching tarball of preloaded images
	I1124 14:21:57.308909  213874 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1124 14:21:57.308977  213874 preload.go:238] Found /home/jenkins/minikube-integration/21932-2805/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1124 14:21:57.308989  213874 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1124 14:21:57.309107  213874 profile.go:143] Saving config to /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/auto-626991/config.json ...
	I1124 14:21:57.309126  213874 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/auto-626991/config.json: {Name:mkcc529a20f79a6894765fd4690705e536e8a416 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:21:57.328800  213874 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1124 14:21:57.328823  213874 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1124 14:21:57.328844  213874 cache.go:240] Successfully downloaded all kic artifacts
	I1124 14:21:57.328876  213874 start.go:360] acquireMachinesLock for auto-626991: {Name:mk763e6682356d95f9cf88abe6cabc12d66c573c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 14:21:57.328981  213874 start.go:364] duration metric: took 84.661µs to acquireMachinesLock for "auto-626991"
	I1124 14:21:57.329011  213874 start.go:93] Provisioning new machine with config: &{Name:auto-626991 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-626991 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 14:21:57.329092  213874 start.go:125] createHost starting for "" (driver="docker")
	I1124 14:21:54.538107  212938 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-152851" ...
	I1124 14:21:54.538194  212938 cli_runner.go:164] Run: docker start default-k8s-diff-port-152851
	I1124 14:21:54.866512  212938 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-152851 --format={{.State.Status}}
	I1124 14:21:54.921096  212938 kic.go:430] container "default-k8s-diff-port-152851" state is running.
	I1124 14:21:54.921481  212938 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-152851
	I1124 14:21:54.964203  212938 profile.go:143] Saving config to /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/default-k8s-diff-port-152851/config.json ...
	I1124 14:21:54.965019  212938 machine.go:94] provisionDockerMachine start ...
	I1124 14:21:54.965131  212938 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-152851
	I1124 14:21:54.995485  212938 main.go:143] libmachine: Using SSH client type: native
	I1124 14:21:54.995866  212938 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1124 14:21:54.995885  212938 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 14:21:55.001290  212938 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1124 14:21:58.183129  212938 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-152851
	
	I1124 14:21:58.183152  212938 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-152851"
	I1124 14:21:58.183214  212938 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-152851
	I1124 14:21:58.224155  212938 main.go:143] libmachine: Using SSH client type: native
	I1124 14:21:58.224473  212938 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1124 14:21:58.224484  212938 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-152851 && echo "default-k8s-diff-port-152851" | sudo tee /etc/hostname
	I1124 14:21:58.404453  212938 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-152851
	
	I1124 14:21:58.404527  212938 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-152851
	I1124 14:21:58.428415  212938 main.go:143] libmachine: Using SSH client type: native
	I1124 14:21:58.428734  212938 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1124 14:21:58.428754  212938 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-152851' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-152851/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-152851' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 14:21:58.583606  212938 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 14:21:58.583633  212938 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21932-2805/.minikube CaCertPath:/home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21932-2805/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21932-2805/.minikube}
	I1124 14:21:58.583700  212938 ubuntu.go:190] setting up certificates
	I1124 14:21:58.583711  212938 provision.go:84] configureAuth start
	I1124 14:21:58.583789  212938 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-152851
	I1124 14:21:58.607183  212938 provision.go:143] copyHostCerts
	I1124 14:21:58.607258  212938 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-2805/.minikube/ca.pem, removing ...
	I1124 14:21:58.607277  212938 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-2805/.minikube/ca.pem
	I1124 14:21:58.607387  212938 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21932-2805/.minikube/ca.pem (1078 bytes)
	I1124 14:21:58.607512  212938 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-2805/.minikube/cert.pem, removing ...
	I1124 14:21:58.607525  212938 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-2805/.minikube/cert.pem
	I1124 14:21:58.607561  212938 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-2805/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21932-2805/.minikube/cert.pem (1123 bytes)
	I1124 14:21:58.607638  212938 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-2805/.minikube/key.pem, removing ...
	I1124 14:21:58.607649  212938 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-2805/.minikube/key.pem
	I1124 14:21:58.607680  212938 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-2805/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21932-2805/.minikube/key.pem (1675 bytes)
	I1124 14:21:58.607737  212938 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21932-2805/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-152851 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-152851 localhost minikube]
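The server certificate is minted with the SAN set shown above (loopback, the node IP 192.168.76.2, the profile name, localhost, minikube). With OpenSSL 1.1.1 or newer the SANs on the generated file can be inspected directly (a verification sketch using the ServerCertPath from the log):

    openssl x509 -noout -ext subjectAltName \
      -in /home/jenkins/minikube-integration/21932-2805/.minikube/machines/server.pem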
	I1124 14:21:58.733801  212938 provision.go:177] copyRemoteCerts
	I1124 14:21:58.733893  212938 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 14:21:58.733959  212938 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-152851
	I1124 14:21:58.752728  212938 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/default-k8s-diff-port-152851/id_rsa Username:docker}
	I1124 14:21:58.860459  212938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1124 14:21:58.880180  212938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1124 14:21:58.899386  212938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1124 14:21:58.918296  212938 provision.go:87] duration metric: took 334.537171ms to configureAuth
	I1124 14:21:58.918391  212938 ubuntu.go:206] setting minikube options for container-runtime
	I1124 14:21:58.918695  212938 config.go:182] Loaded profile config "default-k8s-diff-port-152851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 14:21:58.918939  212938 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-152851
	I1124 14:21:58.952185  212938 main.go:143] libmachine: Using SSH client type: native
	I1124 14:21:58.952918  212938 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1124 14:21:58.952960  212938 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1124 14:21:57.332560  213874 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1124 14:21:57.332849  213874 start.go:159] libmachine.API.Create for "auto-626991" (driver="docker")
	I1124 14:21:57.332885  213874 client.go:173] LocalClient.Create starting
	I1124 14:21:57.332968  213874 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca.pem
	I1124 14:21:57.333009  213874 main.go:143] libmachine: Decoding PEM data...
	I1124 14:21:57.333034  213874 main.go:143] libmachine: Parsing certificate...
	I1124 14:21:57.333096  213874 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21932-2805/.minikube/certs/cert.pem
	I1124 14:21:57.333117  213874 main.go:143] libmachine: Decoding PEM data...
	I1124 14:21:57.333132  213874 main.go:143] libmachine: Parsing certificate...
	I1124 14:21:57.333510  213874 cli_runner.go:164] Run: docker network inspect auto-626991 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1124 14:21:57.350020  213874 cli_runner.go:211] docker network inspect auto-626991 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1124 14:21:57.350103  213874 network_create.go:284] running [docker network inspect auto-626991] to gather additional debugging logs...
	I1124 14:21:57.350123  213874 cli_runner.go:164] Run: docker network inspect auto-626991
	W1124 14:21:57.366099  213874 cli_runner.go:211] docker network inspect auto-626991 returned with exit code 1
	I1124 14:21:57.366138  213874 network_create.go:287] error running [docker network inspect auto-626991]: docker network inspect auto-626991: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network auto-626991 not found
	I1124 14:21:57.366154  213874 network_create.go:289] output of [docker network inspect auto-626991]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network auto-626991 not found
	
	** /stderr **
	I1124 14:21:57.366251  213874 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 14:21:57.384460  213874 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-b3087ee9f269 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:1a:07:60:94:e6:54} reservation:<nil>}
	I1124 14:21:57.384803  213874 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-87dca5a19352 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:e6:6c:c1:85:45:94} reservation:<nil>}
	I1124 14:21:57.385156  213874 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-9e995bd1b79e IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:82:f1:73:f5:6f:cf} reservation:<nil>}
	I1124 14:21:57.385423  213874 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-13603eff9881 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:26:0b:69:f9:14:50} reservation:<nil>}
	I1124 14:21:57.385824  213874 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019e1760}
	I1124 14:21:57.385846  213874 network_create.go:124] attempt to create docker network auto-626991 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1124 14:21:57.385902  213874 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-626991 auto-626991
	I1124 14:21:57.443679  213874 network_create.go:108] docker network auto-626991 192.168.85.0/24 created
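Subnet selection above walks the private 192.168.x.0/24 candidates in steps of 9 (49, 58, 67, 76, ...) and takes the first one with no existing bridge interface, here 192.168.85.0/24. The chosen subnet can be read back from the created network (a verification sketch):

    docker network inspect auto-626991 --format '{{(index .IPAM.Config 0).Subnet}}'
    # expected: 192.168.85.0/24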
	I1124 14:21:57.443715  213874 kic.go:121] calculated static IP "192.168.85.2" for the "auto-626991" container
	I1124 14:21:57.443788  213874 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1124 14:21:57.460620  213874 cli_runner.go:164] Run: docker volume create auto-626991 --label name.minikube.sigs.k8s.io=auto-626991 --label created_by.minikube.sigs.k8s.io=true
	I1124 14:21:57.477462  213874 oci.go:103] Successfully created a docker volume auto-626991
	I1124 14:21:57.477543  213874 cli_runner.go:164] Run: docker run --rm --name auto-626991-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-626991 --entrypoint /usr/bin/test -v auto-626991:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1124 14:21:58.033230  213874 oci.go:107] Successfully prepared a docker volume auto-626991
	I1124 14:21:58.033321  213874 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 14:21:58.033336  213874 kic.go:194] Starting extracting preloaded images to volume ...
	I1124 14:21:58.033411  213874 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21932-2805/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v auto-626991:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir
	I1124 14:21:59.359890  212938 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1124 14:21:59.359914  212938 machine.go:97] duration metric: took 4.394873143s to provisionDockerMachine
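The last provisioning step above writes the service CIDR into CRI-O's environment file so registries hosted on in-cluster service IPs (10.96.0.0/12) can be used without TLS. The written file can be checked from the host (container name from the log):

    docker exec default-k8s-diff-port-152851 cat /etc/sysconfig/crio.minikube
    # expected:
    # CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '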
	I1124 14:21:59.359925  212938 start.go:293] postStartSetup for "default-k8s-diff-port-152851" (driver="docker")
	I1124 14:21:59.359935  212938 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 14:21:59.360015  212938 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 14:21:59.360059  212938 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-152851
	I1124 14:21:59.383585  212938 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/default-k8s-diff-port-152851/id_rsa Username:docker}
	I1124 14:21:59.503374  212938 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 14:21:59.507479  212938 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 14:21:59.507503  212938 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 14:21:59.507514  212938 filesync.go:126] Scanning /home/jenkins/minikube-integration/21932-2805/.minikube/addons for local assets ...
	I1124 14:21:59.507604  212938 filesync.go:126] Scanning /home/jenkins/minikube-integration/21932-2805/.minikube/files for local assets ...
	I1124 14:21:59.507678  212938 filesync.go:149] local asset: /home/jenkins/minikube-integration/21932-2805/.minikube/files/etc/ssl/certs/46112.pem -> 46112.pem in /etc/ssl/certs
	I1124 14:21:59.507778  212938 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 14:21:59.515293  212938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/files/etc/ssl/certs/46112.pem --> /etc/ssl/certs/46112.pem (1708 bytes)
	I1124 14:21:59.532938  212938 start.go:296] duration metric: took 172.997936ms for postStartSetup
	I1124 14:21:59.533101  212938 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 14:21:59.533176  212938 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-152851
	I1124 14:21:59.556813  212938 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/default-k8s-diff-port-152851/id_rsa Username:docker}
	I1124 14:21:59.669065  212938 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 14:21:59.674177  212938 fix.go:56] duration metric: took 5.171513144s for fixHost
	I1124 14:21:59.674201  212938 start.go:83] releasing machines lock for "default-k8s-diff-port-152851", held for 5.171566584s
	I1124 14:21:59.674279  212938 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-152851
	I1124 14:21:59.693017  212938 ssh_runner.go:195] Run: cat /version.json
	I1124 14:21:59.693069  212938 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-152851
	I1124 14:21:59.693096  212938 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 14:21:59.693168  212938 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-152851
	I1124 14:21:59.728674  212938 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/default-k8s-diff-port-152851/id_rsa Username:docker}
	I1124 14:21:59.736549  212938 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/default-k8s-diff-port-152851/id_rsa Username:docker}
	I1124 14:21:59.934602  212938 ssh_runner.go:195] Run: systemctl --version
	I1124 14:21:59.941653  212938 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1124 14:21:59.982049  212938 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 14:21:59.986874  212938 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 14:21:59.986950  212938 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 14:21:59.995899  212938 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
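The find invocation above is logged with its shell quoting stripped; run by hand the parentheses and globs need escaping, roughly (a reconstruction of the logged command, with the mv made quoting-safe):

    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
      -printf '%p, ' -exec sh -c 'sudo mv "$1" "$1.mk_disabled"' _ {} \;

Renaming conflicting bridge/podman CNI configs to *.mk_disabled is how minikube ensures the kindnet config selected later wins.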
	I1124 14:21:59.995919  212938 start.go:496] detecting cgroup driver to use...
	I1124 14:21:59.995949  212938 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1124 14:21:59.995995  212938 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1124 14:22:00.015278  212938 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1124 14:22:00.036407  212938 docker.go:218] disabling cri-docker service (if available) ...
	I1124 14:22:00.036528  212938 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 14:22:00.059047  212938 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 14:22:00.077108  212938 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 14:22:00.409602  212938 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 14:22:00.569066  212938 docker.go:234] disabling docker service ...
	I1124 14:22:00.569193  212938 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 14:22:00.587297  212938 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 14:22:00.602801  212938 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 14:22:00.762853  212938 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 14:22:00.951982  212938 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
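Stopping, then disabling, then masking both the .socket and .service units is what keeps socket activation from silently restarting Docker later; masking symlinks the unit to /dev/null. The equivalent manual sequence on the node (a sketch of the steps logged above):

    sudo systemctl stop -f docker.socket docker.service
    sudo systemctl disable docker.socket
    sudo systemctl mask docker.service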
	I1124 14:22:00.967657  212938 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 14:22:00.985913  212938 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1124 14:22:00.986026  212938 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 14:22:00.995886  212938 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1124 14:22:00.996010  212938 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 14:22:01.007511  212938 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 14:22:01.019378  212938 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 14:22:01.029975  212938 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 14:22:01.039882  212938 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 14:22:01.050560  212938 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 14:22:01.060716  212938 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 14:22:01.071115  212938 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 14:22:01.080721  212938 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 14:22:01.089681  212938 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 14:22:01.249309  212938 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1124 14:22:03.372259  212938 ssh_runner.go:235] Completed: sudo systemctl restart crio: (2.122909114s)
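After the sed edits above and the restart, the relevant keys in /etc/crio/crio.conf.d/02-crio.conf should read as follows (expected state reconstructed from the commands in the log; run inside the node):

    sudo grep -E -A1 'pause_image|cgroup_manager|conmon_cgroup|default_sysctls' \
      /etc/crio/crio.conf.d/02-crio.conf
    # pause_image = "registry.k8s.io/pause:3.10.1"
    # cgroup_manager = "cgroupfs"
    # conmon_cgroup = "pod"
    # default_sysctls = [
    #   "net.ipv4.ip_unprivileged_port_start=0",
    # ]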
	I1124 14:22:03.372352  212938 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1124 14:22:03.372417  212938 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1124 14:22:03.379714  212938 start.go:564] Will wait 60s for crictl version
	I1124 14:22:03.379784  212938 ssh_runner.go:195] Run: which crictl
	I1124 14:22:03.386118  212938 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 14:22:03.431877  212938 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1124 14:22:03.431979  212938 ssh_runner.go:195] Run: crio --version
	I1124 14:22:03.467468  212938 ssh_runner.go:195] Run: crio --version
	I1124 14:22:03.506634  212938 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1124 14:22:03.507792  212938 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-152851 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 14:22:03.526466  212938 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1124 14:22:03.531687  212938 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 14:22:03.544713  212938 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-152851 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-152851 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 14:22:03.544840  212938 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 14:22:03.544888  212938 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 14:22:03.637182  212938 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 14:22:03.637202  212938 crio.go:433] Images already preloaded, skipping extraction
	I1124 14:22:03.637254  212938 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 14:22:03.701499  212938 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 14:22:03.701519  212938 cache_images.go:86] Images are preloaded, skipping loading
	I1124 14:22:03.701526  212938 kubeadm.go:935] updating node { 192.168.76.2 8444 v1.34.1 crio true true} ...
	I1124 14:22:03.701623  212938 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-152851 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-152851 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
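The blank ExecStart= line in the generated drop-in is deliberate: for ordinary (non-oneshot) services systemd rejects a second ExecStart= unless the list is cleared first, so the override fully replaces the packaged kubelet command line. The drop-in is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf by the scp step below and activated with (node-side, as logged further down):

    sudo systemctl daemon-reload && sudo systemctl start kubelet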
	I1124 14:22:03.701702  212938 ssh_runner.go:195] Run: crio config
	I1124 14:22:03.840084  212938 cni.go:84] Creating CNI manager for ""
	I1124 14:22:03.840103  212938 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 14:22:03.840118  212938 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 14:22:03.840171  212938 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-152851 NodeName:default-k8s-diff-port-152851 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 14:22:03.840291  212938 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-152851"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
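The generated file stacks four API documents: InitConfiguration and ClusterConfiguration (kubeadm.k8s.io/v1beta4), KubeletConfiguration (kubelet.config.k8s.io/v1beta1) and KubeProxyConfiguration (kubeproxy.config.k8s.io/v1alpha1). A quick sanity check on the rendered file, whose path comes from the scp step below (a verification sketch):

    docker exec default-k8s-diff-port-152851 grep -c '^kind:' /var/tmp/minikube/kubeadm.yaml.new
    # expected: 4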
	I1124 14:22:03.840360  212938 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1124 14:22:03.857652  212938 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 14:22:03.857729  212938 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 14:22:03.869884  212938 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1124 14:22:03.892019  212938 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 14:22:03.929624  212938 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
	I1124 14:22:03.952807  212938 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1124 14:22:03.967246  212938 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 14:22:03.985265  212938 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 14:22:04.185058  212938 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 14:22:04.210128  212938 certs.go:69] Setting up /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/default-k8s-diff-port-152851 for IP: 192.168.76.2
	I1124 14:22:04.210146  212938 certs.go:195] generating shared ca certs ...
	I1124 14:22:04.210162  212938 certs.go:227] acquiring lock for ca certs: {Name:mk5b88bcf3bee8e73291a2c9c79f99bafa2afa7b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:22:04.210307  212938 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21932-2805/.minikube/ca.key
	I1124 14:22:04.210360  212938 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21932-2805/.minikube/proxy-client-ca.key
	I1124 14:22:04.210368  212938 certs.go:257] generating profile certs ...
	I1124 14:22:04.210454  212938 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/default-k8s-diff-port-152851/client.key
	I1124 14:22:04.210532  212938 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/default-k8s-diff-port-152851/apiserver.key.ec9a3231
	I1124 14:22:04.210571  212938 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/default-k8s-diff-port-152851/proxy-client.key
	I1124 14:22:04.210687  212938 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2805/.minikube/certs/4611.pem (1338 bytes)
	W1124 14:22:04.210723  212938 certs.go:480] ignoring /home/jenkins/minikube-integration/21932-2805/.minikube/certs/4611_empty.pem, impossibly tiny 0 bytes
	I1124 14:22:04.210732  212938 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca-key.pem (1679 bytes)
	I1124 14:22:04.210768  212938 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca.pem (1078 bytes)
	I1124 14:22:04.210792  212938 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2805/.minikube/certs/cert.pem (1123 bytes)
	I1124 14:22:04.210819  212938 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2805/.minikube/certs/key.pem (1675 bytes)
	I1124 14:22:04.210864  212938 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2805/.minikube/files/etc/ssl/certs/46112.pem (1708 bytes)
	I1124 14:22:04.211577  212938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 14:22:04.275796  212938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1124 14:22:04.338011  212938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 14:22:04.370874  212938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1124 14:22:04.396771  212938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/default-k8s-diff-port-152851/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1124 14:22:04.428092  212938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/default-k8s-diff-port-152851/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1124 14:22:04.472372  212938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/default-k8s-diff-port-152851/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 14:22:04.532591  212938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/default-k8s-diff-port-152851/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1124 14:22:04.594977  212938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 14:22:04.686872  212938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/certs/4611.pem --> /usr/share/ca-certificates/4611.pem (1338 bytes)
	I1124 14:22:04.731873  212938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/files/etc/ssl/certs/46112.pem --> /usr/share/ca-certificates/46112.pem (1708 bytes)
	I1124 14:22:04.766122  212938 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 14:22:04.802940  212938 ssh_runner.go:195] Run: openssl version
	I1124 14:22:04.810628  212938 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/46112.pem && ln -fs /usr/share/ca-certificates/46112.pem /etc/ssl/certs/46112.pem"
	I1124 14:22:04.823072  212938 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/46112.pem
	I1124 14:22:04.827880  212938 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 13:21 /usr/share/ca-certificates/46112.pem
	I1124 14:22:04.828020  212938 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/46112.pem
	I1124 14:22:04.876150  212938 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/46112.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 14:22:04.887558  212938 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 14:22:04.897014  212938 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 14:22:04.901457  212938 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 13:14 /usr/share/ca-certificates/minikubeCA.pem
	I1124 14:22:04.901570  212938 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 14:22:04.946597  212938 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 14:22:04.956191  212938 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4611.pem && ln -fs /usr/share/ca-certificates/4611.pem /etc/ssl/certs/4611.pem"
	I1124 14:22:04.966117  212938 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4611.pem
	I1124 14:22:04.970360  212938 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 13:21 /usr/share/ca-certificates/4611.pem
	I1124 14:22:04.970427  212938 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4611.pem
	I1124 14:22:05.014900  212938 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4611.pem /etc/ssl/certs/51391683.0"
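The symlink names above (3ec20f2e.0, b5213941.0, 51391683.0) are OpenSSL subject-name hashes: anything that scans /etc/ssl/certs locates a CA by hashing its subject and opening <hash>.0. The link minikube creates for the cluster CA can be reproduced by hand inside the node (a sketch using the paths from the log):

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"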
	I1124 14:22:05.023709  212938 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 14:22:05.028496  212938 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1124 14:22:05.072067  212938 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1124 14:22:05.131036  212938 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1124 14:22:05.190902  212938 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1124 14:22:05.246132  212938 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1124 14:22:05.304200  212938 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
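Each -checkend 86400 probe above exits non-zero if the certificate expires within the next 24 hours, which is how minikube decides on restart whether control-plane certs need regeneration. For example, inside the node:

    sudo openssl x509 -noout -checkend 86400 \
      -in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
      || echo "certificate expires within 24h"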
	I1124 14:22:05.402037  212938 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-152851 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-152851 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 14:22:05.402133  212938 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 14:22:05.402197  212938 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 14:22:05.451390  212938 cri.go:89] found id: "0c3cb362e4d9189052f992f04fe50fac0c17ff2bd5f72ef4be40e433331ba291"
	I1124 14:22:05.451409  212938 cri.go:89] found id: "79537370e6485bd82564920e391bc4bdfa906e6f8c7d96a71aac5f90ea93fca2"
	I1124 14:22:05.451414  212938 cri.go:89] found id: "1ab258e8f64fcb7b1fb7769531303c2b402470d9e0ae16ffe932a98857f4fd05"
	I1124 14:22:05.451417  212938 cri.go:89] found id: "6f961bf32124218ddecf96482422e8b5741e2a3a4f6241f531f98636f588acab"
	I1124 14:22:05.451420  212938 cri.go:89] found id: ""
	I1124 14:22:05.451466  212938 ssh_runner.go:195] Run: sudo runc list -f json
	W1124 14:22:05.470571  212938 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T14:22:05Z" level=error msg="open /run/runc: no such file or directory"
	I1124 14:22:05.470694  212938 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 14:22:05.482215  212938 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1124 14:22:05.482280  212938 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1124 14:22:05.482361  212938 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1124 14:22:05.492856  212938 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1124 14:22:05.493342  212938 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-152851" does not appear in /home/jenkins/minikube-integration/21932-2805/kubeconfig
	I1124 14:22:05.493511  212938 kubeconfig.go:62] /home/jenkins/minikube-integration/21932-2805/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-152851" cluster setting kubeconfig missing "default-k8s-diff-port-152851" context setting]
	I1124 14:22:05.493865  212938 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2805/kubeconfig: {Name:mk95d10d27091d631e85a5a3c35d5e4e38630871 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:22:05.495346  212938 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1124 14:22:05.507808  212938 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1124 14:22:05.507887  212938 kubeadm.go:602] duration metric: took 25.58908ms to restartPrimaryControlPlane
	I1124 14:22:05.507922  212938 kubeadm.go:403] duration metric: took 105.893112ms to StartCluster
	I1124 14:22:05.507964  212938 settings.go:142] acquiring lock: {Name:mk89c1ba43c874315f683e1eb3a8f5ff3817a931 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:22:05.508068  212938 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21932-2805/kubeconfig
	I1124 14:22:05.508824  212938 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2805/kubeconfig: {Name:mk95d10d27091d631e85a5a3c35d5e4e38630871 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:22:05.509320  212938 config.go:182] Loaded profile config "default-k8s-diff-port-152851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 14:22:05.509453  212938 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 14:22:05.509550  212938 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-152851"
	I1124 14:22:05.509586  212938 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-152851"
	W1124 14:22:05.509630  212938 addons.go:248] addon storage-provisioner should already be in state true
	I1124 14:22:05.509676  212938 host.go:66] Checking if "default-k8s-diff-port-152851" exists ...
	I1124 14:22:05.510348  212938 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-152851 --format={{.State.Status}}
	I1124 14:22:05.510559  212938 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 14:22:05.511075  212938 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-152851"
	I1124 14:22:05.511091  212938 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-152851"
	W1124 14:22:05.511098  212938 addons.go:248] addon dashboard should already be in state true
	I1124 14:22:05.511118  212938 host.go:66] Checking if "default-k8s-diff-port-152851" exists ...
	I1124 14:22:05.511581  212938 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-152851 --format={{.State.Status}}
	I1124 14:22:05.511923  212938 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-152851"
	I1124 14:22:05.511939  212938 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-152851"
	I1124 14:22:05.512203  212938 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-152851 --format={{.State.Status}}
	I1124 14:22:05.515258  212938 out.go:179] * Verifying Kubernetes components...
	I1124 14:22:05.519109  212938 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 14:22:05.553010  212938 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 14:22:05.554292  212938 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 14:22:05.554313  212938 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 14:22:05.554387  212938 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-152851
	I1124 14:22:05.575903  212938 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1124 14:22:05.577036  212938 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1124 14:22:03.167196  213874 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21932-2805/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v auto-626991:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir: (5.133724251s)
	I1124 14:22:03.167232  213874 kic.go:203] duration metric: took 5.133892852s to extract preloaded images to volume ...
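The 5.1s step just completed is the preload shortcut: rather than pulling every image, minikube bind-mounts an lz4-compressed tarball of /var (container images plus CRI-O state) read-only into a helper container and untars it into the machine's named volume. Stripped to its pattern, the logged command is (tag shown without the sha256 digest the log pins):

    docker run --rm --entrypoint /usr/bin/tar \
      -v /home/jenkins/minikube-integration/21932-2805/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro \
      -v auto-626991:/extractDir \
      gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948 \
      -I lz4 -xf /preloaded.tar -C /extractDir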
	W1124 14:22:03.167391  213874 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1124 14:22:03.167515  213874 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1124 14:22:03.251572  213874 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-626991 --name auto-626991 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-626991 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-626991 --network auto-626991 --ip 192.168.85.2 --volume auto-626991:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1124 14:22:03.558028  213874 cli_runner.go:164] Run: docker container inspect auto-626991 --format={{.State.Running}}
	I1124 14:22:03.585778  213874 cli_runner.go:164] Run: docker container inspect auto-626991 --format={{.State.Status}}
	I1124 14:22:03.614697  213874 cli_runner.go:164] Run: docker exec auto-626991 stat /var/lib/dpkg/alternatives/iptables
	I1124 14:22:03.692018  213874 oci.go:144] the created container "auto-626991" has a running status.
	I1124 14:22:03.692044  213874 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21932-2805/.minikube/machines/auto-626991/id_rsa...
	I1124 14:22:03.868968  213874 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21932-2805/.minikube/machines/auto-626991/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1124 14:22:03.904706  213874 cli_runner.go:164] Run: docker container inspect auto-626991 --format={{.State.Status}}
	I1124 14:22:03.931335  213874 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1124 14:22:03.931369  213874 kic_runner.go:114] Args: [docker exec --privileged auto-626991 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1124 14:22:03.995063  213874 cli_runner.go:164] Run: docker container inspect auto-626991 --format={{.State.Status}}
	I1124 14:22:04.019068  213874 machine.go:94] provisionDockerMachine start ...
	I1124 14:22:04.019177  213874 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-626991
	I1124 14:22:04.045694  213874 main.go:143] libmachine: Using SSH client type: native
	I1124 14:22:04.046042  213874 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1124 14:22:04.046051  213874 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 14:22:04.047735  213874 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1124 14:22:05.579453  212938 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1124 14:22:05.579483  212938 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1124 14:22:05.579549  212938 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-152851
	I1124 14:22:05.588291  212938 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-152851"
	W1124 14:22:05.588321  212938 addons.go:248] addon default-storageclass should already be in state true
	I1124 14:22:05.588346  212938 host.go:66] Checking if "default-k8s-diff-port-152851" exists ...
	I1124 14:22:05.588783  212938 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-152851 --format={{.State.Status}}
	I1124 14:22:05.616806  212938 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/default-k8s-diff-port-152851/id_rsa Username:docker}
	I1124 14:22:05.635324  212938 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 14:22:05.635348  212938 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 14:22:05.635496  212938 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-152851
	I1124 14:22:05.637826  212938 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/default-k8s-diff-port-152851/id_rsa Username:docker}
	I1124 14:22:05.670029  212938 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/default-k8s-diff-port-152851/id_rsa Username:docker}
	I1124 14:22:05.821336  212938 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 14:22:05.863732  212938 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1124 14:22:05.863821  212938 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1124 14:22:05.888626  212938 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 14:22:05.922906  212938 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 14:22:05.945666  212938 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1124 14:22:05.945740  212938 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1124 14:22:06.041146  212938 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1124 14:22:06.041221  212938 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1124 14:22:06.097892  212938 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1124 14:22:06.097962  212938 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1124 14:22:06.162376  212938 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1124 14:22:06.162453  212938 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1124 14:22:06.196841  212938 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1124 14:22:06.196920  212938 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1124 14:22:06.231679  212938 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1124 14:22:06.231761  212938 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1124 14:22:06.257050  212938 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1124 14:22:06.257112  212938 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1124 14:22:06.277194  212938 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1124 14:22:06.277271  212938 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1124 14:22:06.302608  212938 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
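Addon manifests are applied with the cluster's own kubectl binary against the in-node kubeconfig, so the apply works before the host kubeconfig is wired up. Once it returns, the rollout can be checked the same way from the host (namespace created by dashboard-ns.yaml above; a verification sketch):

    docker exec default-k8s-diff-port-152851 sudo \
      KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.34.1/kubectl get pods -n kubernetes-dashboard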
	I1124 14:22:07.234983  213874 main.go:143] libmachine: SSH cmd err, output: <nil>: auto-626991
	
	I1124 14:22:07.235008  213874 ubuntu.go:182] provisioning hostname "auto-626991"
	I1124 14:22:07.235072  213874 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-626991
	I1124 14:22:07.262773  213874 main.go:143] libmachine: Using SSH client type: native
	I1124 14:22:07.263094  213874 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1124 14:22:07.263114  213874 main.go:143] libmachine: About to run SSH command:
	sudo hostname auto-626991 && echo "auto-626991" | sudo tee /etc/hostname
	I1124 14:22:07.459927  213874 main.go:143] libmachine: SSH cmd err, output: <nil>: auto-626991
	
	I1124 14:22:07.460008  213874 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-626991
	I1124 14:22:07.504295  213874 main.go:143] libmachine: Using SSH client type: native
	I1124 14:22:07.504623  213874 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1124 14:22:07.504649  213874 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-626991' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-626991/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-626991' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 14:22:07.691584  213874 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 14:22:07.691613  213874 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21932-2805/.minikube CaCertPath:/home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21932-2805/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21932-2805/.minikube}
	I1124 14:22:07.691648  213874 ubuntu.go:190] setting up certificates
	I1124 14:22:07.691657  213874 provision.go:84] configureAuth start
	I1124 14:22:07.691719  213874 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-626991
	I1124 14:22:07.725227  213874 provision.go:143] copyHostCerts
	I1124 14:22:07.725296  213874 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-2805/.minikube/ca.pem, removing ...
	I1124 14:22:07.725312  213874 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-2805/.minikube/ca.pem
	I1124 14:22:07.725398  213874 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21932-2805/.minikube/ca.pem (1078 bytes)
	I1124 14:22:07.725498  213874 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-2805/.minikube/cert.pem, removing ...
	I1124 14:22:07.725512  213874 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-2805/.minikube/cert.pem
	I1124 14:22:07.725544  213874 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-2805/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21932-2805/.minikube/cert.pem (1123 bytes)
	I1124 14:22:07.725612  213874 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-2805/.minikube/key.pem, removing ...
	I1124 14:22:07.725622  213874 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-2805/.minikube/key.pem
	I1124 14:22:07.725650  213874 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-2805/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21932-2805/.minikube/key.pem (1675 bytes)
	I1124 14:22:07.725710  213874 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21932-2805/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca-key.pem org=jenkins.auto-626991 san=[127.0.0.1 192.168.85.2 auto-626991 localhost minikube]
	I1124 14:22:08.211278  213874 provision.go:177] copyRemoteCerts
	I1124 14:22:08.211432  213874 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 14:22:08.211504  213874 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-626991
	I1124 14:22:08.228427  213874 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/auto-626991/id_rsa Username:docker}
	I1124 14:22:08.341632  213874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1124 14:22:08.368710  213874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1124 14:22:08.389020  213874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1124 14:22:08.408365  213874 provision.go:87] duration metric: took 716.684234ms to configureAuth
	I1124 14:22:08.408388  213874 ubuntu.go:206] setting minikube options for container-runtime
	I1124 14:22:08.408578  213874 config.go:182] Loaded profile config "auto-626991": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 14:22:08.408680  213874 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-626991
	I1124 14:22:08.445227  213874 main.go:143] libmachine: Using SSH client type: native
	I1124 14:22:08.445538  213874 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1124 14:22:08.445553  213874 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1124 14:22:08.871041  213874 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1124 14:22:08.871063  213874 machine.go:97] duration metric: took 4.851975904s to provisionDockerMachine
	I1124 14:22:08.871074  213874 client.go:176] duration metric: took 11.538178873s to LocalClient.Create
	I1124 14:22:08.871085  213874 start.go:167] duration metric: took 11.538235784s to libmachine.API.Create "auto-626991"
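The /etc/sysconfig/crio.minikube drop-in written a few lines above is how minikube hands extra flags to CRI-O. Assuming, as on the kicbase image, that the crio unit picks the file up through an EnvironmentFile directive (an assumption, not shown in this log), the wiring can be checked on the node with a sketch like:

	cat /etc/sysconfig/crio.minikube
	# CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	systemctl cat crio | grep -n 'EnvironmentFile\|CRIO_MINIKUBE_OPTIONS'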
	I1124 14:22:08.871092  213874 start.go:293] postStartSetup for "auto-626991" (driver="docker")
	I1124 14:22:08.871101  213874 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 14:22:08.871167  213874 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 14:22:08.871213  213874 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-626991
	I1124 14:22:08.904432  213874 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/auto-626991/id_rsa Username:docker}
	I1124 14:22:09.024804  213874 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 14:22:09.028761  213874 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 14:22:09.028791  213874 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 14:22:09.028803  213874 filesync.go:126] Scanning /home/jenkins/minikube-integration/21932-2805/.minikube/addons for local assets ...
	I1124 14:22:09.028858  213874 filesync.go:126] Scanning /home/jenkins/minikube-integration/21932-2805/.minikube/files for local assets ...
	I1124 14:22:09.028946  213874 filesync.go:149] local asset: /home/jenkins/minikube-integration/21932-2805/.minikube/files/etc/ssl/certs/46112.pem -> 46112.pem in /etc/ssl/certs
	I1124 14:22:09.029054  213874 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 14:22:09.045156  213874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/files/etc/ssl/certs/46112.pem --> /etc/ssl/certs/46112.pem (1708 bytes)
	I1124 14:22:09.083003  213874 start.go:296] duration metric: took 211.897182ms for postStartSetup
	I1124 14:22:09.083494  213874 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-626991
	I1124 14:22:09.108914  213874 profile.go:143] Saving config to /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/auto-626991/config.json ...
	I1124 14:22:09.109200  213874 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 14:22:09.109252  213874 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-626991
	I1124 14:22:09.132900  213874 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/auto-626991/id_rsa Username:docker}
	I1124 14:22:09.253451  213874 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
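The two df probes above feed minikube's disk-capacity check on /var; illustratively (output values below are examples, not from this run):

	df -h /var | awk 'NR==2{print $5}'    # percent of /var in use, e.g. "12%"
	df -BG /var | awk 'NR==2{print $4}'   # space still available in GiB, e.g. "170G"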
	I1124 14:22:09.259225  213874 start.go:128] duration metric: took 11.930118763s to createHost
	I1124 14:22:09.259251  213874 start.go:83] releasing machines lock for "auto-626991", held for 11.930257119s
	I1124 14:22:09.259332  213874 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-626991
	I1124 14:22:09.286515  213874 ssh_runner.go:195] Run: cat /version.json
	I1124 14:22:09.286566  213874 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-626991
	I1124 14:22:09.286804  213874 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 14:22:09.286883  213874 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-626991
	I1124 14:22:09.317344  213874 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/auto-626991/id_rsa Username:docker}
	I1124 14:22:09.323555  213874 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/auto-626991/id_rsa Username:docker}
	I1124 14:22:09.443055  213874 ssh_runner.go:195] Run: systemctl --version
	I1124 14:22:09.546776  213874 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1124 14:22:09.613883  213874 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 14:22:09.621460  213874 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 14:22:09.621529  213874 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 14:22:09.670237  213874 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
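The find/mv step above sidelines competing bridge and podman CNI definitions by appending a .mk_disabled suffix, so the runtime only ever loads minikube's own network config. A hedged sketch of inspecting, and if ever needed reverting, that change:

	ls /etc/cni/net.d/                       # *.mk_disabled entries are the sidelined configs
	for f in /etc/cni/net.d/*.mk_disabled; do
	  sudo mv "$f" "${f%.mk_disabled}"       # restore a config minikube disabled
	done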
	I1124 14:22:09.670256  213874 start.go:496] detecting cgroup driver to use...
	I1124 14:22:09.670287  213874 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1124 14:22:09.670346  213874 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1124 14:22:09.702072  213874 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1124 14:22:09.718703  213874 docker.go:218] disabling cri-docker service (if available) ...
	I1124 14:22:09.718818  213874 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 14:22:09.745999  213874 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 14:22:09.774558  213874 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 14:22:09.984371  213874 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 14:22:10.200153  213874 docker.go:234] disabling docker service ...
	I1124 14:22:10.200264  213874 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 14:22:10.236891  213874 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 14:22:10.258029  213874 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 14:22:10.424478  213874 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 14:22:10.611458  213874 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 14:22:10.637245  213874 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 14:22:10.670715  213874 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1124 14:22:10.670831  213874 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 14:22:10.683752  213874 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1124 14:22:10.683874  213874 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 14:22:10.694831  213874 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 14:22:10.704498  213874 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 14:22:10.718447  213874 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 14:22:10.731217  213874 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 14:22:10.741600  213874 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 14:22:10.755147  213874 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
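Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf containing roughly the following settings (a reconstruction from the commands; the file itself is never dumped in this log):

	cat /etc/crio/crio.conf.d/02-crio.conf
	# pause_image = "registry.k8s.io/pause:3.10.1"
	# cgroup_manager = "cgroupfs"
	# conmon_cgroup = "pod"
	# default_sysctls = [
	#   "net.ipv4.ip_unprivileged_port_start=0",
	# ]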
	I1124 14:22:10.769996  213874 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 14:22:10.780550  213874 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 14:22:10.788750  213874 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 14:22:10.997921  213874 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1124 14:22:11.223991  213874 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1124 14:22:11.224111  213874 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1124 14:22:11.227951  213874 start.go:564] Will wait 60s for crictl version
	I1124 14:22:11.228071  213874 ssh_runner.go:195] Run: which crictl
	I1124 14:22:11.235101  213874 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 14:22:11.282436  213874 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1124 14:22:11.282580  213874 ssh_runner.go:195] Run: crio --version
	I1124 14:22:11.331860  213874 ssh_runner.go:195] Run: crio --version
	I1124 14:22:11.373566  213874 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1124 14:22:11.375056  213874 cli_runner.go:164] Run: docker network inspect auto-626991 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 14:22:11.398636  213874 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1124 14:22:11.402709  213874 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 14:22:11.413475  213874 kubeadm.go:884] updating cluster {Name:auto-626991 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-626991 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 14:22:11.413604  213874 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 14:22:11.413660  213874 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 14:22:11.463495  213874 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 14:22:11.463517  213874 crio.go:433] Images already preloaded, skipping extraction
	I1124 14:22:11.463572  213874 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 14:22:11.504077  213874 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 14:22:11.504096  213874 cache_images.go:86] Images are preloaded, skipping loading
	I1124 14:22:11.504103  213874 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1124 14:22:11.504189  213874 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=auto-626991 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:auto-626991 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
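The empty ExecStart= line in the generated drop-in above is the standard systemd idiom: it first clears the ExecStart inherited from the base kubelet.service, then substitutes minikube's full command line. The generic pattern looks like this (paths and names below are illustrative, not from this log):

	# /etc/systemd/system/<unit>.service.d/override.conf
	[Service]
	ExecStart=
	ExecStart=/path/to/binary --flag=value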
	I1124 14:22:11.504262  213874 ssh_runner.go:195] Run: crio config
	I1124 14:22:11.621271  213874 cni.go:84] Creating CNI manager for ""
	I1124 14:22:11.621419  213874 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 14:22:11.621453  213874 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 14:22:11.621504  213874 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-626991 NodeName:auto-626991 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 14:22:11.621667  213874 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-626991"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1124 14:22:11.621773  213874 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1124 14:22:11.633665  213874 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 14:22:11.633792  213874 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 14:22:11.643525  213874 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (361 bytes)
	I1124 14:22:11.672517  213874 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 14:22:11.698704  213874 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2208 bytes)
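With the rendered config now staged at /var/tmp/minikube/kubeadm.yaml.new, one way to sanity-check it before the real init later in this log would be a dry run (a sketch; the binary location and config path are taken from the surrounding commands):

	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init \
	  --config /var/tmp/minikube/kubeadm.yaml.new --dry-run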
	I1124 14:22:11.712462  213874 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1124 14:22:11.716863  213874 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 14:22:11.727305  213874 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 14:22:11.944624  213874 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 14:22:11.981934  213874 certs.go:69] Setting up /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/auto-626991 for IP: 192.168.85.2
	I1124 14:22:11.982012  213874 certs.go:195] generating shared ca certs ...
	I1124 14:22:11.982051  213874 certs.go:227] acquiring lock for ca certs: {Name:mk5b88bcf3bee8e73291a2c9c79f99bafa2afa7b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:22:11.982285  213874 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21932-2805/.minikube/ca.key
	I1124 14:22:11.982385  213874 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21932-2805/.minikube/proxy-client-ca.key
	I1124 14:22:11.982432  213874 certs.go:257] generating profile certs ...
	I1124 14:22:11.982541  213874 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/auto-626991/client.key
	I1124 14:22:11.982598  213874 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/auto-626991/client.crt with IP's: []
	I1124 14:22:12.375789  213874 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/auto-626991/client.crt ...
	I1124 14:22:12.375874  213874 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/auto-626991/client.crt: {Name:mk2b6b425e1346bd7b8911f945f39b4335ed2ecb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:22:12.376109  213874 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/auto-626991/client.key ...
	I1124 14:22:12.376144  213874 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/auto-626991/client.key: {Name:mk9ef474e0734f3338e26c65a52f2e13d7ce4704 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:22:12.376288  213874 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/auto-626991/apiserver.key.1f66f160
	I1124 14:22:12.376328  213874 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/auto-626991/apiserver.crt.1f66f160 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1124 14:22:12.489632  213874 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/auto-626991/apiserver.crt.1f66f160 ...
	I1124 14:22:12.489715  213874 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/auto-626991/apiserver.crt.1f66f160: {Name:mkdc5fc0585688e9c8ae9ddca28ca169dcf9d013 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:22:12.489922  213874 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/auto-626991/apiserver.key.1f66f160 ...
	I1124 14:22:12.489968  213874 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/auto-626991/apiserver.key.1f66f160: {Name:mk614bf7ba6042e262731e77a7b6aba451f3fad4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:22:12.490099  213874 certs.go:382] copying /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/auto-626991/apiserver.crt.1f66f160 -> /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/auto-626991/apiserver.crt
	I1124 14:22:12.490226  213874 certs.go:386] copying /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/auto-626991/apiserver.key.1f66f160 -> /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/auto-626991/apiserver.key
	I1124 14:22:12.490324  213874 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/auto-626991/proxy-client.key
	I1124 14:22:12.490364  213874 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/auto-626991/proxy-client.crt with IP's: []
	I1124 14:22:12.930229  213874 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/auto-626991/proxy-client.crt ...
	I1124 14:22:12.930256  213874 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/auto-626991/proxy-client.crt: {Name:mkd7ff303b0b57ba65b0c2c43834e68800ab93f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:22:12.930426  213874 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/auto-626991/proxy-client.key ...
	I1124 14:22:12.930433  213874 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/auto-626991/proxy-client.key: {Name:mk154aa7a9ce009650b7dc1e9d5aae783827da18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:22:12.930605  213874 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2805/.minikube/certs/4611.pem (1338 bytes)
	W1124 14:22:12.930641  213874 certs.go:480] ignoring /home/jenkins/minikube-integration/21932-2805/.minikube/certs/4611_empty.pem, impossibly tiny 0 bytes
	I1124 14:22:12.930649  213874 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca-key.pem (1679 bytes)
	I1124 14:22:12.930688  213874 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca.pem (1078 bytes)
	I1124 14:22:12.930713  213874 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2805/.minikube/certs/cert.pem (1123 bytes)
	I1124 14:22:12.930739  213874 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2805/.minikube/certs/key.pem (1675 bytes)
	I1124 14:22:12.930784  213874 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2805/.minikube/files/etc/ssl/certs/46112.pem (1708 bytes)
	I1124 14:22:12.931344  213874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 14:22:12.977400  213874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1124 14:22:13.105341  213874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 14:22:13.134805  213874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1124 14:22:13.156738  213874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/auto-626991/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1124 14:22:13.178510  213874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/auto-626991/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1124 14:22:13.199576  213874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/auto-626991/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 14:22:13.228465  213874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/auto-626991/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1124 14:22:13.264130  213874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/certs/4611.pem --> /usr/share/ca-certificates/4611.pem (1338 bytes)
	I1124 14:22:13.296732  213874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/files/etc/ssl/certs/46112.pem --> /usr/share/ca-certificates/46112.pem (1708 bytes)
	I1124 14:22:13.326437  213874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 14:22:13.355785  213874 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 14:22:13.374892  213874 ssh_runner.go:195] Run: openssl version
	I1124 14:22:13.386626  213874 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 14:22:13.400498  213874 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 14:22:13.411559  213874 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 13:14 /usr/share/ca-certificates/minikubeCA.pem
	I1124 14:22:13.411725  213874 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 14:22:13.494822  213874 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 14:22:13.506215  213874 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4611.pem && ln -fs /usr/share/ca-certificates/4611.pem /etc/ssl/certs/4611.pem"
	I1124 14:22:13.520108  213874 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4611.pem
	I1124 14:22:13.524962  213874 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 13:21 /usr/share/ca-certificates/4611.pem
	I1124 14:22:13.525025  213874 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4611.pem
	I1124 14:22:13.577238  213874 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4611.pem /etc/ssl/certs/51391683.0"
	I1124 14:22:13.593736  213874 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/46112.pem && ln -fs /usr/share/ca-certificates/46112.pem /etc/ssl/certs/46112.pem"
	I1124 14:22:13.603002  213874 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/46112.pem
	I1124 14:22:13.609518  213874 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 13:21 /usr/share/ca-certificates/46112.pem
	I1124 14:22:13.609584  213874 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/46112.pem
	I1124 14:22:13.670900  213874 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/46112.pem /etc/ssl/certs/3ec20f2e.0"
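Each of the three link steps above follows the same OpenSSL convention: a CA becomes trusted system-wide once it is reachable under /etc/ssl/certs by its subject-hash name (b5213941.0, 51391683.0, and 3ec20f2e.0 here). The generic pattern, sketched for one of the certificates:

	pem=/usr/share/ca-certificates/minikubeCA.pem
	h=$(openssl x509 -hash -noout -in "$pem")     # e.g. b5213941
	sudo ln -fs "$pem" "/etc/ssl/certs/${h}.0"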
	I1124 14:22:13.682074  213874 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 14:22:13.687746  213874 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1124 14:22:13.687891  213874 kubeadm.go:401] StartCluster: {Name:auto-626991 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-626991 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 14:22:13.687988  213874 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 14:22:13.688073  213874 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 14:22:13.733288  213874 cri.go:89] found id: ""
	I1124 14:22:13.733407  213874 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 14:22:13.754603  213874 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1124 14:22:13.772085  213874 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1124 14:22:13.772208  213874 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1124 14:22:13.790351  213874 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1124 14:22:13.790418  213874 kubeadm.go:158] found existing configuration files:
	
	I1124 14:22:13.790494  213874 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1124 14:22:13.807100  213874 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1124 14:22:13.807212  213874 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1124 14:22:13.823285  213874 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1124 14:22:13.844006  213874 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1124 14:22:13.844149  213874 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1124 14:22:13.866170  213874 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1124 14:22:13.878244  213874 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1124 14:22:13.878362  213874 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1124 14:22:13.890208  213874 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1124 14:22:13.899774  213874 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1124 14:22:13.899887  213874 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1124 14:22:13.911670  213874 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1124 14:22:13.975089  213874 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1124 14:22:13.975333  213874 kubeadm.go:319] [preflight] Running pre-flight checks
	I1124 14:22:14.020093  213874 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1124 14:22:14.020257  213874 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1124 14:22:14.020335  213874 kubeadm.go:319] OS: Linux
	I1124 14:22:14.020407  213874 kubeadm.go:319] CGROUPS_CPU: enabled
	I1124 14:22:14.020495  213874 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1124 14:22:14.020601  213874 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1124 14:22:14.020684  213874 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1124 14:22:14.020764  213874 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1124 14:22:14.020849  213874 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1124 14:22:14.020922  213874 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1124 14:22:14.021004  213874 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1124 14:22:14.021081  213874 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1124 14:22:14.152076  213874 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1124 14:22:14.152245  213874 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1124 14:22:14.152369  213874 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1124 14:22:14.164269  213874 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1124 14:22:14.728464  212938 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.907041262s)
	I1124 14:22:14.728523  212938 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (8.839836794s)
	I1124 14:22:14.728543  212938 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-152851" to be "Ready" ...
	I1124 14:22:14.728849  212938 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.805868009s)
	I1124 14:22:14.781587  212938 node_ready.go:49] node "default-k8s-diff-port-152851" is "Ready"
	I1124 14:22:14.781669  212938 node_ready.go:38] duration metric: took 53.105039ms for node "default-k8s-diff-port-152851" to be "Ready" ...
	I1124 14:22:14.781699  212938 api_server.go:52] waiting for apiserver process to appear ...
	I1124 14:22:14.781787  212938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 14:22:15.045672  212938 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (8.742937231s)
	I1124 14:22:15.045892  212938 api_server.go:72] duration metric: took 9.535272742s to wait for apiserver process to appear ...
	I1124 14:22:15.045947  212938 api_server.go:88] waiting for apiserver healthz status ...
	I1124 14:22:15.045984  212938 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1124 14:22:15.048613  212938 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-152851 addons enable metrics-server
	
	I1124 14:22:15.051644  212938 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1124 14:22:14.169516  213874 out.go:252]   - Generating certificates and keys ...
	I1124 14:22:14.169676  213874 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1124 14:22:14.169795  213874 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1124 14:22:14.585454  213874 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1124 14:22:15.355462  213874 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1124 14:22:16.102126  213874 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1124 14:22:16.826876  213874 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1124 14:22:17.040300  213874 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1124 14:22:17.041012  213874 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [auto-626991 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1124 14:22:15.054545  212938 addons.go:530] duration metric: took 9.545086561s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1124 14:22:15.059246  212938 api_server.go:279] https://192.168.76.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1124 14:22:15.059275  212938 api_server.go:103] status: https://192.168.76.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1124 14:22:15.547073  212938 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1124 14:22:15.565525  212938 api_server.go:279] https://192.168.76.2:8444/healthz returned 200:
	ok
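The same probe can be reproduced by hand; the node IP and port 8444 come from the log above, ?verbose requests the per-check breakdown seen in the earlier 500 response, and -k skips verification against the cluster CA:

	curl -k "https://192.168.76.2:8444/healthz?verbose"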
	I1124 14:22:15.566665  212938 api_server.go:141] control plane version: v1.34.1
	I1124 14:22:15.566721  212938 api_server.go:131] duration metric: took 520.749055ms to wait for apiserver health ...
	I1124 14:22:15.566747  212938 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 14:22:15.575018  212938 system_pods.go:59] 8 kube-system pods found
	I1124 14:22:15.575098  212938 system_pods.go:61] "coredns-66bc5c9577-qnfqn" [386494d3-c6d0-46da-898f-5936bcc3bb40] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 14:22:15.575143  212938 system_pods.go:61] "etcd-default-k8s-diff-port-152851" [73849492-289b-4e8a-b132-076ac817ec77] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 14:22:15.575174  212938 system_pods.go:61] "kindnet-4j292" [b23f3231-3c24-4e8a-bb05-74e475601643] Running
	I1124 14:22:15.575205  212938 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-152851" [43967435-4b2a-4555-879f-03c39fe3874a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1124 14:22:15.575234  212938 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-152851" [088e57fd-b148-4545-93c8-e115d7ce1c9e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 14:22:15.575258  212938 system_pods.go:61] "kube-proxy-m92jb" [118788fe-af1a-46f0-8ff3-7c4a381d36fd] Running
	I1124 14:22:15.575286  212938 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-152851" [59ad535f-afc5-418d-af06-b88121856fc9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 14:22:15.575318  212938 system_pods.go:61] "storage-provisioner" [21b060b9-5567-4a41-8e79-351855fb6f30] Running
	I1124 14:22:15.575339  212938 system_pods.go:74] duration metric: took 8.568163ms to wait for pod list to return data ...
	I1124 14:22:15.575380  212938 default_sa.go:34] waiting for default service account to be created ...
	I1124 14:22:15.582403  212938 default_sa.go:45] found service account: "default"
	I1124 14:22:15.582471  212938 default_sa.go:55] duration metric: took 7.070103ms for default service account to be created ...
	I1124 14:22:15.582503  212938 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 14:22:15.592894  212938 system_pods.go:86] 8 kube-system pods found
	I1124 14:22:15.592986  212938 system_pods.go:89] "coredns-66bc5c9577-qnfqn" [386494d3-c6d0-46da-898f-5936bcc3bb40] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 14:22:15.593011  212938 system_pods.go:89] "etcd-default-k8s-diff-port-152851" [73849492-289b-4e8a-b132-076ac817ec77] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 14:22:15.593042  212938 system_pods.go:89] "kindnet-4j292" [b23f3231-3c24-4e8a-bb05-74e475601643] Running
	I1124 14:22:15.593069  212938 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-152851" [43967435-4b2a-4555-879f-03c39fe3874a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1124 14:22:15.593097  212938 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-152851" [088e57fd-b148-4545-93c8-e115d7ce1c9e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 14:22:15.593128  212938 system_pods.go:89] "kube-proxy-m92jb" [118788fe-af1a-46f0-8ff3-7c4a381d36fd] Running
	I1124 14:22:15.593157  212938 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-152851" [59ad535f-afc5-418d-af06-b88121856fc9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 14:22:15.593176  212938 system_pods.go:89] "storage-provisioner" [21b060b9-5567-4a41-8e79-351855fb6f30] Running
	I1124 14:22:15.593206  212938 system_pods.go:126] duration metric: took 10.680693ms to wait for k8s-apps to be running ...
	I1124 14:22:15.593235  212938 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 14:22:15.593309  212938 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 14:22:15.633479  212938 system_svc.go:56] duration metric: took 40.234785ms WaitForService to wait for kubelet
	I1124 14:22:15.633546  212938 kubeadm.go:587] duration metric: took 10.122926814s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 14:22:15.633581  212938 node_conditions.go:102] verifying NodePressure condition ...
	I1124 14:22:15.640437  212938 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1124 14:22:15.640513  212938 node_conditions.go:123] node cpu capacity is 2
	I1124 14:22:15.640544  212938 node_conditions.go:105] duration metric: took 6.939599ms to run NodePressure ...
	I1124 14:22:15.640580  212938 start.go:242] waiting for startup goroutines ...
	I1124 14:22:15.640618  212938 start.go:247] waiting for cluster config update ...
	I1124 14:22:15.640645  212938 start.go:256] writing updated cluster config ...
	I1124 14:22:15.640957  212938 ssh_runner.go:195] Run: rm -f paused
	I1124 14:22:15.644922  212938 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 14:22:15.651193  212938 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-qnfqn" in "kube-system" namespace to be "Ready" or be gone ...
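
The timestamps below jump back and forth because two runs are interleaved in this log: PID 212938 is the default-k8s-diff-port-152851 verification pass, and PID 213874 is a concurrent auto-626991 bring-up. The extra wait selects each control-plane pod by label and polls until its Ready condition is True; the W-prefixed lines that follow are that retry loop. A minimal way to reproduce the per-pod check by hand, assuming kubectl is pointed at the profile's context:

    kubectl --context default-k8s-diff-port-152851 -n kube-system get pod coredns-66bc5c9577-qnfqn \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
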
	W1124 14:22:17.738542  212938 pod_ready.go:104] pod "coredns-66bc5c9577-qnfqn" is not "Ready", error: <nil>
	I1124 14:22:17.348726  213874 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1124 14:22:17.349080  213874 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [auto-626991 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1124 14:22:17.470061  213874 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1124 14:22:18.492660  213874 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1124 14:22:19.386916  213874 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1124 14:22:19.386994  213874 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1124 14:22:19.713376  213874 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1124 14:22:20.336514  213874 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1124 14:22:20.871769  213874 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1124 14:22:22.200250  213874 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1124 14:22:23.197598  213874 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1124 14:22:23.197695  213874 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1124 14:22:23.206641  213874 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W1124 14:22:20.161246  212938 pod_ready.go:104] pod "coredns-66bc5c9577-qnfqn" is not "Ready", error: <nil>
	W1124 14:22:22.674838  212938 pod_ready.go:104] pod "coredns-66bc5c9577-qnfqn" is not "Ready", error: <nil>
	I1124 14:22:23.211933  213874 out.go:252]   - Booting up control plane ...
	I1124 14:22:23.212053  213874 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1124 14:22:23.212137  213874 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1124 14:22:23.212217  213874 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1124 14:22:23.229585  213874 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1124 14:22:23.229693  213874 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1124 14:22:23.239109  213874 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1124 14:22:23.239969  213874 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1124 14:22:23.240261  213874 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1124 14:22:23.418534  213874 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1124 14:22:23.418657  213874 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1124 14:22:25.944899  213874 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 2.526475711s
	I1124 14:22:25.950776  213874 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1124 14:22:25.950872  213874 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1124 14:22:25.951187  213874 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1124 14:22:25.951281  213874 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
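
These are the standard component health endpoints kubeadm polls: the kubelet on plain HTTP port 10248, the controller-manager and scheduler on their secure ports. They can also be probed directly while waiting (a sketch, run from a shell on the auto-626991 node):

    curl -s  http://127.0.0.1:10248/healthz    # kubelet
    curl -sk https://127.0.0.1:10257/healthz   # kube-controller-manager
    curl -sk https://127.0.0.1:10259/livez     # kube-scheduler
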
	W1124 14:22:25.160237  212938 pod_ready.go:104] pod "coredns-66bc5c9577-qnfqn" is not "Ready", error: <nil>
	W1124 14:22:27.656850  212938 pod_ready.go:104] pod "coredns-66bc5c9577-qnfqn" is not "Ready", error: <nil>
	W1124 14:22:29.659674  212938 pod_ready.go:104] pod "coredns-66bc5c9577-qnfqn" is not "Ready", error: <nil>
	W1124 14:22:32.157745  212938 pod_ready.go:104] pod "coredns-66bc5c9577-qnfqn" is not "Ready", error: <nil>
	W1124 14:22:34.159815  212938 pod_ready.go:104] pod "coredns-66bc5c9577-qnfqn" is not "Ready", error: <nil>
	I1124 14:22:32.159474  213874 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 6.206603012s
	I1124 14:22:32.953522  213874 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 7.002663639s
	I1124 14:22:34.699347  213874 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 8.748536308s
	I1124 14:22:34.758204  213874 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1124 14:22:34.779011  213874 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1124 14:22:34.794805  213874 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1124 14:22:34.795009  213874 kubeadm.go:319] [mark-control-plane] Marking the node auto-626991 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1124 14:22:34.808076  213874 kubeadm.go:319] [bootstrap-token] Using token: bh15hk.2agovimrivvussod
	I1124 14:22:34.811303  213874 out.go:252]   - Configuring RBAC rules ...
	I1124 14:22:34.811455  213874 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1124 14:22:34.820379  213874 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1124 14:22:34.830071  213874 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1124 14:22:34.836887  213874 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1124 14:22:34.842064  213874 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1124 14:22:34.847216  213874 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1124 14:22:35.109802  213874 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1124 14:22:35.540842  213874 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1124 14:22:36.106833  213874 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1124 14:22:36.108481  213874 kubeadm.go:319] 
	I1124 14:22:36.108562  213874 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1124 14:22:36.108572  213874 kubeadm.go:319] 
	I1124 14:22:36.108703  213874 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1124 14:22:36.108714  213874 kubeadm.go:319] 
	I1124 14:22:36.108739  213874 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1124 14:22:36.108803  213874 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1124 14:22:36.108864  213874 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1124 14:22:36.108876  213874 kubeadm.go:319] 
	I1124 14:22:36.108937  213874 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1124 14:22:36.108946  213874 kubeadm.go:319] 
	I1124 14:22:36.108994  213874 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1124 14:22:36.109003  213874 kubeadm.go:319] 
	I1124 14:22:36.109064  213874 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1124 14:22:36.109152  213874 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1124 14:22:36.109225  213874 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1124 14:22:36.109234  213874 kubeadm.go:319] 
	I1124 14:22:36.109326  213874 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1124 14:22:36.109411  213874 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1124 14:22:36.109421  213874 kubeadm.go:319] 
	I1124 14:22:36.109505  213874 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token bh15hk.2agovimrivvussod \
	I1124 14:22:36.109615  213874 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:37f0f49cec723293ddb4e564b6685275917c85627d2c55051ccb0f083d16274f \
	I1124 14:22:36.109641  213874 kubeadm.go:319] 	--control-plane 
	I1124 14:22:36.109651  213874 kubeadm.go:319] 
	I1124 14:22:36.109736  213874 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1124 14:22:36.109745  213874 kubeadm.go:319] 
	I1124 14:22:36.109827  213874 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token bh15hk.2agovimrivvussod \
	I1124 14:22:36.109933  213874 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:37f0f49cec723293ddb4e564b6685275917c85627d2c55051ccb0f083d16274f 
	I1124 14:22:36.114992  213874 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1124 14:22:36.115210  213874 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1124 14:22:36.115317  213874 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
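
The join commands above embed a bootstrap token (valid 24 hours by default) and the SHA-256 hash of the cluster CA's public key. Both can be regenerated on the control plane if lost; the openssl pipeline is the standard recipe from the kubeadm documentation:

    kubeadm token create --print-join-command
    openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'
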
	I1124 14:22:36.115339  213874 cni.go:84] Creating CNI manager for ""
	I1124 14:22:36.115347  213874 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 14:22:36.118674  213874 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1124 14:22:36.121678  213874 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1124 14:22:36.126014  213874 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1124 14:22:36.126037  213874 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1124 14:22:36.145054  213874 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
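
The kindnet manifest is rendered in memory (2601 bytes), copied to /var/tmp/minikube/cni.yaml, and applied with the bundled kubectl; the stat of /opt/cni/bin/portmap above is the probe confirming the CNI plugin binaries are present. What actually landed on the node can be inspected afterwards (assuming the auto-626991 profile is still up):

    minikube -p auto-626991 ssh -- 'cat /etc/cni/net.d/10-kindnet.conflist && ls /opt/cni/bin'
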
	I1124 14:22:36.544867  213874 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1124 14:22:36.545029  213874 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-626991 minikube.k8s.io/updated_at=2025_11_24T14_22_36_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=b5d1c9f4e75f4e638a533695fd62619949cefcab minikube.k8s.io/name=auto-626991 minikube.k8s.io/primary=true
	I1124 14:22:36.545034  213874 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:22:36.760135  213874 ops.go:34] apiserver oom_adj: -16
	I1124 14:22:36.760264  213874 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1124 14:22:36.657134  212938 pod_ready.go:104] pod "coredns-66bc5c9577-qnfqn" is not "Ready", error: <nil>
	W1124 14:22:38.657803  212938 pod_ready.go:104] pod "coredns-66bc5c9577-qnfqn" is not "Ready", error: <nil>
	I1124 14:22:37.260584  213874 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:22:37.761145  213874 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:22:38.261048  213874 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:22:38.760766  213874 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:22:39.261209  213874 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:22:39.396092  213874 kubeadm.go:1114] duration metric: took 2.851139361s to wait for elevateKubeSystemPrivileges
	I1124 14:22:39.396124  213874 kubeadm.go:403] duration metric: took 25.708234996s to StartCluster
	I1124 14:22:39.396140  213874 settings.go:142] acquiring lock: {Name:mk89c1ba43c874315f683e1eb3a8f5ff3817a931 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:22:39.396199  213874 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21932-2805/kubeconfig
	I1124 14:22:39.397136  213874 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2805/kubeconfig: {Name:mk95d10d27091d631e85a5a3c35d5e4e38630871 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:22:39.397370  213874 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 14:22:39.397488  213874 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1124 14:22:39.397753  213874 config.go:182] Loaded profile config "auto-626991": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 14:22:39.397731  213874 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 14:22:39.397815  213874 addons.go:70] Setting storage-provisioner=true in profile "auto-626991"
	I1124 14:22:39.397852  213874 addons.go:239] Setting addon storage-provisioner=true in "auto-626991"
	I1124 14:22:39.397878  213874 host.go:66] Checking if "auto-626991" exists ...
	I1124 14:22:39.398349  213874 cli_runner.go:164] Run: docker container inspect auto-626991 --format={{.State.Status}}
	I1124 14:22:39.398601  213874 addons.go:70] Setting default-storageclass=true in profile "auto-626991"
	I1124 14:22:39.398617  213874 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "auto-626991"
	I1124 14:22:39.398870  213874 cli_runner.go:164] Run: docker container inspect auto-626991 --format={{.State.Status}}
	I1124 14:22:39.402327  213874 out.go:179] * Verifying Kubernetes components...
	I1124 14:22:39.406749  213874 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 14:22:39.438069  213874 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 14:22:39.439647  213874 addons.go:239] Setting addon default-storageclass=true in "auto-626991"
	I1124 14:22:39.439762  213874 host.go:66] Checking if "auto-626991" exists ...
	I1124 14:22:39.442381  213874 cli_runner.go:164] Run: docker container inspect auto-626991 --format={{.State.Status}}
	I1124 14:22:39.444890  213874 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 14:22:39.444911  213874 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 14:22:39.444967  213874 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-626991
	I1124 14:22:39.480938  213874 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 14:22:39.480958  213874 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 14:22:39.481020  213874 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-626991
	I1124 14:22:39.495223  213874 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/auto-626991/id_rsa Username:docker}
	I1124 14:22:39.516541  213874 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/auto-626991/id_rsa Username:docker}
	I1124 14:22:39.776866  213874 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
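
Rather than patching the coredns ConfigMap, minikube replaces it wholesale through the sed pipeline above, inserting a hosts block ahead of the "forward . /etc/resolv.conf" directive and a log directive before errors. Reconstructed from the sed expression, the injected Corefile fragment is:

        hosts {
           192.168.85.1 host.minikube.internal
           fallthrough
        }

The "host record injected into CoreDNS's ConfigMap" line below confirms the replace succeeded.
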
	I1124 14:22:39.815014  213874 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 14:22:39.957921  213874 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 14:22:40.068815  213874 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 14:22:40.284494  213874 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1124 14:22:40.287137  213874 node_ready.go:35] waiting up to 15m0s for node "auto-626991" to be "Ready" ...
	I1124 14:22:40.794767  213874 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-626991" context rescaled to 1 replicas
	I1124 14:22:40.813690  213874 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1124 14:22:40.816425  213874 addons.go:530] duration metric: took 1.418688646s for enable addons: enabled=[storage-provisioner default-storageclass]
	W1124 14:22:41.157341  212938 pod_ready.go:104] pod "coredns-66bc5c9577-qnfqn" is not "Ready", error: <nil>
	W1124 14:22:43.657763  212938 pod_ready.go:104] pod "coredns-66bc5c9577-qnfqn" is not "Ready", error: <nil>
	W1124 14:22:42.291942  213874 node_ready.go:57] node "auto-626991" has "Ready":"False" status (will retry)
	W1124 14:22:44.790660  213874 node_ready.go:57] node "auto-626991" has "Ready":"False" status (will retry)
	W1124 14:22:45.661339  212938 pod_ready.go:104] pod "coredns-66bc5c9577-qnfqn" is not "Ready", error: <nil>
	W1124 14:22:48.157220  212938 pod_ready.go:104] pod "coredns-66bc5c9577-qnfqn" is not "Ready", error: <nil>
	W1124 14:22:47.290190  213874 node_ready.go:57] node "auto-626991" has "Ready":"False" status (will retry)
	W1124 14:22:49.790633  213874 node_ready.go:57] node "auto-626991" has "Ready":"False" status (will retry)
	W1124 14:22:50.157728  212938 pod_ready.go:104] pod "coredns-66bc5c9577-qnfqn" is not "Ready", error: <nil>
	I1124 14:22:51.657145  212938 pod_ready.go:94] pod "coredns-66bc5c9577-qnfqn" is "Ready"
	I1124 14:22:51.657171  212938 pod_ready.go:86] duration metric: took 36.005907128s for pod "coredns-66bc5c9577-qnfqn" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:22:51.660102  212938 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-152851" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:22:51.664862  212938 pod_ready.go:94] pod "etcd-default-k8s-diff-port-152851" is "Ready"
	I1124 14:22:51.664885  212938 pod_ready.go:86] duration metric: took 4.754715ms for pod "etcd-default-k8s-diff-port-152851" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:22:51.667201  212938 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-152851" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:22:51.671884  212938 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-152851" is "Ready"
	I1124 14:22:51.671911  212938 pod_ready.go:86] duration metric: took 4.686613ms for pod "kube-apiserver-default-k8s-diff-port-152851" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:22:51.674136  212938 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-152851" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:22:51.855581  212938 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-152851" is "Ready"
	I1124 14:22:51.855609  212938 pod_ready.go:86] duration metric: took 181.448117ms for pod "kube-controller-manager-default-k8s-diff-port-152851" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:22:52.056180  212938 pod_ready.go:83] waiting for pod "kube-proxy-m92jb" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:22:52.454880  212938 pod_ready.go:94] pod "kube-proxy-m92jb" is "Ready"
	I1124 14:22:52.454906  212938 pod_ready.go:86] duration metric: took 398.698546ms for pod "kube-proxy-m92jb" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:22:52.655170  212938 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-152851" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:22:53.055216  212938 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-152851" is "Ready"
	I1124 14:22:53.055242  212938 pod_ready.go:86] duration metric: took 400.041969ms for pod "kube-scheduler-default-k8s-diff-port-152851" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:22:53.055255  212938 pod_ready.go:40] duration metric: took 37.410262957s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 14:22:53.108086  212938 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1124 14:22:53.111472  212938 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-152851" cluster and "default" namespace by default
	W1124 14:22:52.290915  213874 node_ready.go:57] node "auto-626991" has "Ready":"False" status (will retry)
	W1124 14:22:54.789948  213874 node_ready.go:57] node "auto-626991" has "Ready":"False" status (will retry)
	W1124 14:22:57.290275  213874 node_ready.go:57] node "auto-626991" has "Ready":"False" status (will retry)
	W1124 14:22:59.292428  213874 node_ready.go:57] node "auto-626991" has "Ready":"False" status (will retry)
	W1124 14:23:01.790687  213874 node_ready.go:57] node "auto-626991" has "Ready":"False" status (will retry)
	W1124 14:23:04.293609  213874 node_ready.go:57] node "auto-626991" has "Ready":"False" status (will retry)
	W1124 14:23:06.295954  213874 node_ready.go:57] node "auto-626991" has "Ready":"False" status (will retry)
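
This ends the client-side logs; the sections below are the post-mortem diagnostics collected from the default-k8s-diff-port-152851 node (CRI-O journal, container status, per-component logs, node description, dmesg). The same bundle can be regenerated from a live profile with:

    minikube -p default-k8s-diff-port-152851 logs --file=logs.txt
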
	
	
	==> CRI-O <==
	Nov 24 14:22:45 default-k8s-diff-port-152851 crio[653]: time="2025-11-24T14:22:45.192081757Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 14:22:45 default-k8s-diff-port-152851 crio[653]: time="2025-11-24T14:22:45.201732087Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 14:22:45 default-k8s-diff-port-152851 crio[653]: time="2025-11-24T14:22:45.202072062Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/0fe67197ed092f1e4ddf54ac5e4276588c2664aa6d51812725534928b3e90c99/merged/etc/group: no such file or directory"
	Nov 24 14:22:45 default-k8s-diff-port-152851 crio[653]: time="2025-11-24T14:22:45.20320855Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 14:22:45 default-k8s-diff-port-152851 crio[653]: time="2025-11-24T14:22:45.232759187Z" level=info msg="Created container c22e573f34aa0e2dc77ca244518275a75fe9d0fc6dea3e2b5672083318574366: kubernetes-dashboard/kubernetes-dashboard-855c9754f9-n9rgw/kubernetes-dashboard" id=f22b27d7-7def-45b7-b9f1-84632e79abdf name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 14:22:45 default-k8s-diff-port-152851 crio[653]: time="2025-11-24T14:22:45.234063208Z" level=info msg="Starting container: c22e573f34aa0e2dc77ca244518275a75fe9d0fc6dea3e2b5672083318574366" id=030f6d7b-1640-4a4a-9bc7-ada492c7c978 name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 14:22:45 default-k8s-diff-port-152851 crio[653]: time="2025-11-24T14:22:45.239815735Z" level=info msg="Started container" PID=1644 containerID=c22e573f34aa0e2dc77ca244518275a75fe9d0fc6dea3e2b5672083318574366 description=kubernetes-dashboard/kubernetes-dashboard-855c9754f9-n9rgw/kubernetes-dashboard id=030f6d7b-1640-4a4a-9bc7-ada492c7c978 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e7266de4109737a5005cfa351de750c6e8a6129708fb0519946c624f028985a9
	Nov 24 14:22:46 default-k8s-diff-port-152851 crio[653]: time="2025-11-24T14:22:46.035564679Z" level=info msg="Removing container: 3c0fe8ae572d0945f9bb967ad15c8c3823c09a8cc32eac4644fd3265f8329de9" id=5bcd1981-f443-423a-91dc-7c25c0992ea3 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 24 14:22:46 default-k8s-diff-port-152851 crio[653]: time="2025-11-24T14:22:46.043747802Z" level=info msg="Error loading conmon cgroup of container 3c0fe8ae572d0945f9bb967ad15c8c3823c09a8cc32eac4644fd3265f8329de9: cgroup deleted" id=5bcd1981-f443-423a-91dc-7c25c0992ea3 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 24 14:22:46 default-k8s-diff-port-152851 crio[653]: time="2025-11-24T14:22:46.047288171Z" level=info msg="Removed container 3c0fe8ae572d0945f9bb967ad15c8c3823c09a8cc32eac4644fd3265f8329de9: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-bx8xc/dashboard-metrics-scraper" id=5bcd1981-f443-423a-91dc-7c25c0992ea3 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 24 14:22:53 default-k8s-diff-port-152851 crio[653]: time="2025-11-24T14:22:53.684639934Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 24 14:22:53 default-k8s-diff-port-152851 crio[653]: time="2025-11-24T14:22:53.688449761Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 24 14:22:53 default-k8s-diff-port-152851 crio[653]: time="2025-11-24T14:22:53.688482853Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 24 14:22:53 default-k8s-diff-port-152851 crio[653]: time="2025-11-24T14:22:53.688502529Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 24 14:22:53 default-k8s-diff-port-152851 crio[653]: time="2025-11-24T14:22:53.695842387Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 24 14:22:53 default-k8s-diff-port-152851 crio[653]: time="2025-11-24T14:22:53.695875478Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 24 14:22:53 default-k8s-diff-port-152851 crio[653]: time="2025-11-24T14:22:53.695898461Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 24 14:22:53 default-k8s-diff-port-152851 crio[653]: time="2025-11-24T14:22:53.698913573Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 24 14:22:53 default-k8s-diff-port-152851 crio[653]: time="2025-11-24T14:22:53.699073427Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 24 14:22:53 default-k8s-diff-port-152851 crio[653]: time="2025-11-24T14:22:53.699119639Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 24 14:22:53 default-k8s-diff-port-152851 crio[653]: time="2025-11-24T14:22:53.702355381Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 24 14:22:53 default-k8s-diff-port-152851 crio[653]: time="2025-11-24T14:22:53.70238734Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 24 14:22:53 default-k8s-diff-port-152851 crio[653]: time="2025-11-24T14:22:53.702413597Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 24 14:22:53 default-k8s-diff-port-152851 crio[653]: time="2025-11-24T14:22:53.705506559Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 24 14:22:53 default-k8s-diff-port-152851 crio[653]: time="2025-11-24T14:22:53.705538609Z" level=info msg="Updated default CNI network name to kindnet"
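
The CREATE/WRITE/RENAME burst above is kindnet rewriting its CNI config atomically: it writes 10-kindnet.conflist.temp and renames it over 10-kindnet.conflist, and CRI-O's config watcher reloads the default network on each event. The same reload loop can be followed live on the node (assuming systemd journal access):

    minikube -p default-k8s-diff-port-152851 ssh -- 'sudo journalctl -u crio -f | grep "CNI monitoring"'
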
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	c22e573f34aa0       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   23 seconds ago       Running             kubernetes-dashboard        0                   e7266de410973       kubernetes-dashboard-855c9754f9-n9rgw                  kubernetes-dashboard
	8c79dacfe2ce2       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           23 seconds ago       Running             storage-provisioner         2                   d2278d8171798       storage-provisioner                                    kube-system
	562bb15e55c2c       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           23 seconds ago       Exited              dashboard-metrics-scraper   2                   9070a8bad5a5d       dashboard-metrics-scraper-6ffb444bf9-bx8xc             kubernetes-dashboard
	c8989c13e14d0       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           54 seconds ago       Running             busybox                     1                   5867e364048f7       busybox                                                default
	1c698135a2638       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           55 seconds ago       Running             kube-proxy                  1                   f289ff1c5cb02       kube-proxy-m92jb                                       kube-system
	a1ed8677ffbfc       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           55 seconds ago       Running             coredns                     1                   86bc9657ac448       coredns-66bc5c9577-qnfqn                               kube-system
	39188bea6a564       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           55 seconds ago       Running             kindnet-cni                 1                   56072b31069d6       kindnet-4j292                                          kube-system
	2989f7e752e95       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           55 seconds ago       Exited              storage-provisioner         1                   d2278d8171798       storage-provisioner                                    kube-system
	0c3cb362e4d91       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   b44afd490cb7e       kube-apiserver-default-k8s-diff-port-152851            kube-system
	79537370e6485       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   0c5530df43674       etcd-default-k8s-diff-port-152851                      kube-system
	1ab258e8f64fc       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   9705521b00ad0       kube-scheduler-default-k8s-diff-port-152851            kube-system
	6f961bf321242       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   583d62b5165cb       kube-controller-manager-default-k8s-diff-port-152851   kube-system
	
	
	==> coredns [a1ed8677ffbfcde28812b6af270ab182d73971a25480fded9fd523b9027de0fb] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:52857 - 64591 "HINFO IN 6377043057621824404.6857678930301169623. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.022612221s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
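
The reflector timeouts against 10.96.0.1:443 show CoreDNS came up before the restarted kube-proxy and kindnet had reprogrammed the Service VIP dataplane; once the rules were written (kindnet reports "Caches are synced" at 14:22:45 below), the pod went Ready, which matches the 36s coredns wait in the client log. A quick way to confirm the VIP has backing endpoints:

    kubectl get svc kubernetes -n default
    kubectl get endpointslices -n default -l kubernetes.io/service-name=kubernetes
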
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-152851
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-152851
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b5d1c9f4e75f4e638a533695fd62619949cefcab
	                    minikube.k8s.io/name=default-k8s-diff-port-152851
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T14_20_38_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 14:20:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-152851
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 14:23:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 14:22:42 +0000   Mon, 24 Nov 2025 14:20:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 14:22:42 +0000   Mon, 24 Nov 2025 14:20:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 14:22:42 +0000   Mon, 24 Nov 2025 14:20:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Nov 2025 14:22:42 +0000   Mon, 24 Nov 2025 14:21:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-diff-port-152851
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 7283ea1857f18f20a875c29069214c9d
	  System UUID:                854b5bec-4224-4750-be80-397681d0c7d0
	  Boot ID:                    1b5f797b-5607-4a65-8de2-379783b7e272
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         100s
	  kube-system                 coredns-66bc5c9577-qnfqn                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m25s
	  kube-system                 etcd-default-k8s-diff-port-152851                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m30s
	  kube-system                 kindnet-4j292                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m25s
	  kube-system                 kube-apiserver-default-k8s-diff-port-152851             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m30s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-152851    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m31s
	  kube-system                 kube-proxy-m92jb                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m25s
	  kube-system                 kube-scheduler-default-k8s-diff-port-152851             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m30s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m23s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-bx8xc              0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-n9rgw                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m23s                  kube-proxy       
	  Normal   Starting                 53s                    kube-proxy       
	  Normal   Starting                 2m38s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m38s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m38s (x8 over 2m38s)  kubelet          Node default-k8s-diff-port-152851 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m38s (x8 over 2m38s)  kubelet          Node default-k8s-diff-port-152851 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m38s (x8 over 2m38s)  kubelet          Node default-k8s-diff-port-152851 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m31s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m31s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    2m30s                  kubelet          Node default-k8s-diff-port-152851 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m30s                  kubelet          Node default-k8s-diff-port-152851 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  2m30s                  kubelet          Node default-k8s-diff-port-152851 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           2m26s                  node-controller  Node default-k8s-diff-port-152851 event: Registered Node default-k8s-diff-port-152851 in Controller
	  Normal   NodeReady                104s                   kubelet          Node default-k8s-diff-port-152851 status is now: NodeReady
	  Normal   Starting                 64s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 64s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  64s (x8 over 64s)      kubelet          Node default-k8s-diff-port-152851 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    64s (x8 over 64s)      kubelet          Node default-k8s-diff-port-152851 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     64s (x8 over 64s)      kubelet          Node default-k8s-diff-port-152851 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           51s                    node-controller  Node default-k8s-diff-port-152851 event: Registered Node default-k8s-diff-port-152851 in Controller
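
The three "Starting kubelet." groups (2m38s, 2m31s, and 64s ago) plausibly line up with initial provisioning, the kubelet restart during kubeadm init, and the stop/start exercised by this test; the RegisteredNode event at 51s marks the controller-manager re-adopting the node after the last restart. This view can be regenerated at any time with:

    kubectl describe node default-k8s-diff-port-152851
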
	
	
	==> dmesg <==
	[ +47.364934] overlayfs: idmapped layers are currently not supported
	[Nov24 13:59] overlayfs: idmapped layers are currently not supported
	[Nov24 14:00] overlayfs: idmapped layers are currently not supported
	[ +26.972375] overlayfs: idmapped layers are currently not supported
	[Nov24 14:02] overlayfs: idmapped layers are currently not supported
	[Nov24 14:03] overlayfs: idmapped layers are currently not supported
	[Nov24 14:05] overlayfs: idmapped layers are currently not supported
	[Nov24 14:07] overlayfs: idmapped layers are currently not supported
	[ +22.741489] overlayfs: idmapped layers are currently not supported
	[Nov24 14:11] overlayfs: idmapped layers are currently not supported
	[Nov24 14:13] overlayfs: idmapped layers are currently not supported
	[ +29.661409] overlayfs: idmapped layers are currently not supported
	[ +14.398898] overlayfs: idmapped layers are currently not supported
	[Nov24 14:14] overlayfs: idmapped layers are currently not supported
	[ +36.148198] overlayfs: idmapped layers are currently not supported
	[Nov24 14:16] overlayfs: idmapped layers are currently not supported
	[Nov24 14:17] overlayfs: idmapped layers are currently not supported
	[Nov24 14:18] overlayfs: idmapped layers are currently not supported
	[ +49.916713] overlayfs: idmapped layers are currently not supported
	[Nov24 14:19] overlayfs: idmapped layers are currently not supported
	[Nov24 14:20] overlayfs: idmapped layers are currently not supported
	[Nov24 14:21] overlayfs: idmapped layers are currently not supported
	[ +26.692408] overlayfs: idmapped layers are currently not supported
	[Nov24 14:22] overlayfs: idmapped layers are currently not supported
	[ +21.257761] overlayfs: idmapped layers are currently not supported
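
The repeated "overlayfs: idmapped layers are currently not supported" lines are benign on this 5.15 kernel, where idmapped overlayfs mounts are not yet available; the runtime falls back and the kernel logs the attempt roughly once per profile start. Filtering the ring buffer for anything more serious is straightforward:

    sudo dmesg --level=err,warn | tail -20
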
	
	
	==> etcd [79537370e6485bd82564920e391bc4bdfa906e6f8c7d96a71aac5f90ea93fca2] <==
	{"level":"warn","ts":"2025-11-24T14:22:09.240845Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49710","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:22:09.270420Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49728","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:22:09.308977Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49750","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:22:09.346935Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49782","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:22:09.391641Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49794","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:22:09.421443Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49806","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:22:09.465728Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49826","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:22:09.491719Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49856","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:22:09.532801Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49866","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:22:09.541901Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49884","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:22:09.592849Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49894","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:22:09.630641Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49910","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:22:09.646491Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49924","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:22:09.678143Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49940","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:22:09.729972Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49954","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:22:09.849414Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49968","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:22:09.898432Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49984","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:22:09.973491Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60074","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:22:10.015642Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60090","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:22:10.172085Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60104","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:22:13.117790Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"111.494448ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-apiserver-default-k8s-diff-port-152851\" limit:1 ","response":"range_response_count:1 size:7752"}
	{"level":"info","ts":"2025-11-24T14:22:13.117847Z","caller":"traceutil/trace.go:172","msg":"trace[1151595630] range","detail":"{range_begin:/registry/pods/kube-system/kube-apiserver-default-k8s-diff-port-152851; range_end:; response_count:1; response_revision:523; }","duration":"111.56206ms","start":"2025-11-24T14:22:13.006273Z","end":"2025-11-24T14:22:13.117835Z","steps":["trace[1151595630] 'agreement among raft nodes before linearized reading'  (duration: 92.729636ms)","trace[1151595630] 'range keys from in-memory index tree'  (duration: 18.68139ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-24T14:22:13.118047Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"111.726828ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient\" limit:1 ","response":"range_response_count:1 size:718"}
	{"level":"info","ts":"2025-11-24T14:22:13.118066Z","caller":"traceutil/trace.go:172","msg":"trace[1048545038] range","detail":"{range_begin:/registry/clusterroles/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient; range_end:; response_count:1; response_revision:524; }","duration":"111.748826ms","start":"2025-11-24T14:22:13.006312Z","end":"2025-11-24T14:22:13.118061Z","steps":["trace[1048545038] 'agreement among raft nodes before linearized reading'  (duration: 111.682364ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T14:22:13.118163Z","caller":"traceutil/trace.go:172","msg":"trace[1234685541] transaction","detail":"{read_only:false; response_revision:524; number_of_response:1; }","duration":"129.814916ms","start":"2025-11-24T14:22:12.988340Z","end":"2025-11-24T14:22:13.118155Z","steps":["trace[1234685541] 'process raft request'  (duration: 110.724799ms)","trace[1234685541] 'compare'  (duration: 18.367804ms)"],"step_count":2}
	
	
	==> kernel <==
	 14:23:08 up  2:05,  0 user,  load average: 4.55, 3.50, 2.78
	Linux default-k8s-diff-port-152851 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [39188bea6a564382a6980903b479f266ffccf33b661be945f494d30c4d35a2a1] <==
	I1124 14:22:13.492845       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1124 14:22:13.493069       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1124 14:22:13.493190       1 main.go:148] setting mtu 1500 for CNI 
	I1124 14:22:13.493202       1 main.go:178] kindnetd IP family: "ipv4"
	I1124 14:22:13.493212       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-24T14:22:13Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1124 14:22:13.709192       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1124 14:22:13.709220       1 controller.go:381] "Waiting for informer caches to sync"
	I1124 14:22:13.709230       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1124 14:22:13.709533       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1124 14:22:43.684531       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1124 14:22:43.709793       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1124 14:22:43.709890       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1124 14:22:43.709991       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1124 14:22:45.113176       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1124 14:22:45.113215       1 metrics.go:72] Registering metrics
	I1124 14:22:45.113274       1 controller.go:711] "Syncing nftables rules"
	I1124 14:22:53.684123       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1124 14:22:53.684343       1 main.go:301] handling current node
	I1124 14:23:03.692846       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1124 14:23:03.692884       1 main.go:301] handling current node
	
	
	==> kube-apiserver [0c3cb362e4d9189052f992f04fe50fac0c17ff2bd5f72ef4be40e433331ba291] <==
	I1124 14:22:11.751191       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1124 14:22:11.751227       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1124 14:22:11.765960       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1124 14:22:11.766034       1 aggregator.go:171] initial CRD sync complete...
	I1124 14:22:11.766042       1 autoregister_controller.go:144] Starting autoregister controller
	I1124 14:22:11.766049       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1124 14:22:11.766055       1 cache.go:39] Caches are synced for autoregister controller
	I1124 14:22:11.773809       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 14:22:11.773853       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1124 14:22:11.774808       1 cache.go:39] Caches are synced for LocalAvailability controller
	E1124 14:22:11.796848       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1124 14:22:11.844550       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1124 14:22:11.844601       1 policy_source.go:240] refreshing policies
	I1124 14:22:11.875506       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1124 14:22:12.383199       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1124 14:22:12.677958       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1124 14:22:14.036508       1 controller.go:667] quota admission added evaluator for: namespaces
	I1124 14:22:14.545811       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1124 14:22:14.676397       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1124 14:22:14.773519       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1124 14:22:14.992561       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.103.140.40"}
	I1124 14:22:15.038841       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.108.218.156"}
	I1124 14:22:17.151588       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1124 14:22:17.451117       1 controller.go:667] quota admission added evaluator for: endpoints
	I1124 14:22:17.503656       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [6f961bf32124218ddecf96482422e8b5741e2a3a4f6241f531f98636f588acab] <==
	I1124 14:22:17.004155       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1124 14:22:17.004239       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1124 14:22:17.004312       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1124 14:22:17.004369       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1124 14:22:17.004398       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1124 14:22:17.009147       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1124 14:22:17.012542       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1124 14:22:17.015104       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1124 14:22:17.015312       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1124 14:22:17.015534       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-152851"
	I1124 14:22:17.015658       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1124 14:22:17.019894       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1124 14:22:17.022434       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1124 14:22:17.024102       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 14:22:17.028942       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 14:22:17.030377       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 14:22:17.030473       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1124 14:22:17.030507       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1124 14:22:17.036929       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1124 14:22:17.042573       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1124 14:22:17.045806       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1124 14:22:17.045924       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1124 14:22:17.045938       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1124 14:22:17.048942       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1124 14:22:17.055162       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	
	
	==> kube-proxy [1c698135a263826ddf532798c50e5a822a0a3c1879d5551637c0335e965e578a] <==
	I1124 14:22:15.254032       1 server_linux.go:53] "Using iptables proxy"
	I1124 14:22:15.333143       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1124 14:22:15.444386       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1124 14:22:15.444501       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1124 14:22:15.444627       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 14:22:15.470653       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 14:22:15.470769       1 server_linux.go:132] "Using iptables Proxier"
	I1124 14:22:15.474710       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 14:22:15.475109       1 server.go:527] "Version info" version="v1.34.1"
	I1124 14:22:15.475288       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 14:22:15.476690       1 config.go:200] "Starting service config controller"
	I1124 14:22:15.476751       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 14:22:15.476811       1 config.go:106] "Starting endpoint slice config controller"
	I1124 14:22:15.476838       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 14:22:15.476874       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 14:22:15.476936       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1124 14:22:15.477633       1 config.go:309] "Starting node config controller"
	I1124 14:22:15.480146       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 14:22:15.480210       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1124 14:22:15.579455       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1124 14:22:15.600798       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1124 14:22:15.600830       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [1ab258e8f64fcb7b1fb7769531303c2b402470d9e0ae16ffe932a98857f4fd05] <==
	I1124 14:22:10.587447       1 serving.go:386] Generated self-signed cert in-memory
	I1124 14:22:15.166185       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1124 14:22:15.166295       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 14:22:15.172176       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1124 14:22:15.172368       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1124 14:22:15.172428       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1124 14:22:15.172479       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1124 14:22:15.180805       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 14:22:15.187442       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 14:22:15.181030       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1124 14:22:15.187605       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1124 14:22:15.272602       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1124 14:22:15.288049       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1124 14:22:15.288124       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 24 14:22:17 default-k8s-diff-port-152851 kubelet[782]: I1124 14:22:17.662603     782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/81506cf9-bce8-4955-8685-686c2fe938fb-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-n9rgw\" (UID: \"81506cf9-bce8-4955-8685-686c2fe938fb\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-n9rgw"
	Nov 24 14:22:17 default-k8s-diff-port-152851 kubelet[782]: W1124 14:22:17.981018     782 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/bb380e4fa749c80f5c1b19c95fcad1ed1f835691b8483c3ebe22f8e66973175a/crio-9070a8bad5a5d83c701317ea5de703851d0e05c2181f492428b4007d48740164 WatchSource:0}: Error finding container 9070a8bad5a5d83c701317ea5de703851d0e05c2181f492428b4007d48740164: Status 404 returned error can't find the container with id 9070a8bad5a5d83c701317ea5de703851d0e05c2181f492428b4007d48740164
	Nov 24 14:22:18 default-k8s-diff-port-152851 kubelet[782]: W1124 14:22:18.001880     782 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/bb380e4fa749c80f5c1b19c95fcad1ed1f835691b8483c3ebe22f8e66973175a/crio-e7266de4109737a5005cfa351de750c6e8a6129708fb0519946c624f028985a9 WatchSource:0}: Error finding container e7266de4109737a5005cfa351de750c6e8a6129708fb0519946c624f028985a9: Status 404 returned error can't find the container with id e7266de4109737a5005cfa351de750c6e8a6129708fb0519946c624f028985a9
	Nov 24 14:22:21 default-k8s-diff-port-152851 kubelet[782]: I1124 14:22:21.292265     782 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 24 14:22:24 default-k8s-diff-port-152851 kubelet[782]: I1124 14:22:24.933985     782 scope.go:117] "RemoveContainer" containerID="946383ed71ec7606ee61d3904da95ae5f85dfda4dd465a11f11d07701cdc6ebe"
	Nov 24 14:22:25 default-k8s-diff-port-152851 kubelet[782]: I1124 14:22:25.938940     782 scope.go:117] "RemoveContainer" containerID="946383ed71ec7606ee61d3904da95ae5f85dfda4dd465a11f11d07701cdc6ebe"
	Nov 24 14:22:25 default-k8s-diff-port-152851 kubelet[782]: I1124 14:22:25.940081     782 scope.go:117] "RemoveContainer" containerID="3c0fe8ae572d0945f9bb967ad15c8c3823c09a8cc32eac4644fd3265f8329de9"
	Nov 24 14:22:25 default-k8s-diff-port-152851 kubelet[782]: E1124 14:22:25.940251     782 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-bx8xc_kubernetes-dashboard(e8676d02-dd48-4dc0-b25e-bd9c480084bc)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-bx8xc" podUID="e8676d02-dd48-4dc0-b25e-bd9c480084bc"
	Nov 24 14:22:26 default-k8s-diff-port-152851 kubelet[782]: I1124 14:22:26.944308     782 scope.go:117] "RemoveContainer" containerID="3c0fe8ae572d0945f9bb967ad15c8c3823c09a8cc32eac4644fd3265f8329de9"
	Nov 24 14:22:26 default-k8s-diff-port-152851 kubelet[782]: E1124 14:22:26.945045     782 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-bx8xc_kubernetes-dashboard(e8676d02-dd48-4dc0-b25e-bd9c480084bc)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-bx8xc" podUID="e8676d02-dd48-4dc0-b25e-bd9c480084bc"
	Nov 24 14:22:27 default-k8s-diff-port-152851 kubelet[782]: I1124 14:22:27.946524     782 scope.go:117] "RemoveContainer" containerID="3c0fe8ae572d0945f9bb967ad15c8c3823c09a8cc32eac4644fd3265f8329de9"
	Nov 24 14:22:27 default-k8s-diff-port-152851 kubelet[782]: E1124 14:22:27.947139     782 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-bx8xc_kubernetes-dashboard(e8676d02-dd48-4dc0-b25e-bd9c480084bc)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-bx8xc" podUID="e8676d02-dd48-4dc0-b25e-bd9c480084bc"
	Nov 24 14:22:43 default-k8s-diff-port-152851 kubelet[782]: I1124 14:22:43.692294     782 scope.go:117] "RemoveContainer" containerID="3c0fe8ae572d0945f9bb967ad15c8c3823c09a8cc32eac4644fd3265f8329de9"
	Nov 24 14:22:45 default-k8s-diff-port-152851 kubelet[782]: I1124 14:22:45.023220     782 scope.go:117] "RemoveContainer" containerID="2989f7e752e954727b85260ced94cba345baeb3ca207485296c9174eb09dfd54"
	Nov 24 14:22:46 default-k8s-diff-port-152851 kubelet[782]: I1124 14:22:46.030718     782 scope.go:117] "RemoveContainer" containerID="3c0fe8ae572d0945f9bb967ad15c8c3823c09a8cc32eac4644fd3265f8329de9"
	Nov 24 14:22:46 default-k8s-diff-port-152851 kubelet[782]: I1124 14:22:46.031002     782 scope.go:117] "RemoveContainer" containerID="562bb15e55c2c7efd782aafdd77f42b761f21c9e5ed723b3a3473ff620847201"
	Nov 24 14:22:46 default-k8s-diff-port-152851 kubelet[782]: E1124 14:22:46.031175     782 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-bx8xc_kubernetes-dashboard(e8676d02-dd48-4dc0-b25e-bd9c480084bc)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-bx8xc" podUID="e8676d02-dd48-4dc0-b25e-bd9c480084bc"
	Nov 24 14:22:46 default-k8s-diff-port-152851 kubelet[782]: I1124 14:22:46.101547     782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-n9rgw" podStartSLOduration=1.926383425 podStartE2EDuration="29.101527643s" podCreationTimestamp="2025-11-24 14:22:17 +0000 UTC" firstStartedPulling="2025-11-24 14:22:18.006271969 +0000 UTC m=+13.782792411" lastFinishedPulling="2025-11-24 14:22:45.181416179 +0000 UTC m=+40.957936629" observedRunningTime="2025-11-24 14:22:46.071738579 +0000 UTC m=+41.848259029" watchObservedRunningTime="2025-11-24 14:22:46.101527643 +0000 UTC m=+41.878048085"
	Nov 24 14:22:47 default-k8s-diff-port-152851 kubelet[782]: I1124 14:22:47.923113     782 scope.go:117] "RemoveContainer" containerID="562bb15e55c2c7efd782aafdd77f42b761f21c9e5ed723b3a3473ff620847201"
	Nov 24 14:22:47 default-k8s-diff-port-152851 kubelet[782]: E1124 14:22:47.923298     782 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-bx8xc_kubernetes-dashboard(e8676d02-dd48-4dc0-b25e-bd9c480084bc)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-bx8xc" podUID="e8676d02-dd48-4dc0-b25e-bd9c480084bc"
	Nov 24 14:22:59 default-k8s-diff-port-152851 kubelet[782]: I1124 14:22:59.691795     782 scope.go:117] "RemoveContainer" containerID="562bb15e55c2c7efd782aafdd77f42b761f21c9e5ed723b3a3473ff620847201"
	Nov 24 14:22:59 default-k8s-diff-port-152851 kubelet[782]: E1124 14:22:59.691987     782 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-bx8xc_kubernetes-dashboard(e8676d02-dd48-4dc0-b25e-bd9c480084bc)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-bx8xc" podUID="e8676d02-dd48-4dc0-b25e-bd9c480084bc"
	Nov 24 14:23:05 default-k8s-diff-port-152851 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 24 14:23:05 default-k8s-diff-port-152851 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 24 14:23:05 default-k8s-diff-port-152851 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [c22e573f34aa0e2dc77ca244518275a75fe9d0fc6dea3e2b5672083318574366] <==
	2025/11/24 14:22:45 Using namespace: kubernetes-dashboard
	2025/11/24 14:22:45 Using in-cluster config to connect to apiserver
	2025/11/24 14:22:45 Using secret token for csrf signing
	2025/11/24 14:22:45 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/24 14:22:45 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/24 14:22:45 Successful initial request to the apiserver, version: v1.34.1
	2025/11/24 14:22:45 Generating JWE encryption key
	2025/11/24 14:22:45 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/24 14:22:45 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/24 14:22:45 Initializing JWE encryption key from synchronized object
	2025/11/24 14:22:45 Creating in-cluster Sidecar client
	2025/11/24 14:22:45 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/24 14:22:45 Serving insecurely on HTTP port: 9090
	2025/11/24 14:22:45 Starting overwatch
	
	
	==> storage-provisioner [2989f7e752e954727b85260ced94cba345baeb3ca207485296c9174eb09dfd54] <==
	I1124 14:22:14.046706       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1124 14:22:44.050528       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [8c79dacfe2ce221fbea9a2486ec69afabc6d46c3c4d82441e90690036efd9d52] <==
	I1124 14:22:45.179280       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1124 14:22:45.199177       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1124 14:22:45.199517       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1124 14:22:45.207342       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:22:48.662677       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:22:52.922949       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:22:56.521410       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:22:59.574728       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:23:02.597429       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:23:02.602975       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1124 14:23:02.603124       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1124 14:23:02.603888       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d5980bf8-00ae-4d19-87f0-18805e995386", APIVersion:"v1", ResourceVersion:"686", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-152851_b86a2b06-7d3b-4b45-b327-820f8093d619 became leader
	I1124 14:23:02.605064       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-152851_b86a2b06-7d3b-4b45-b327-820f8093d619!
	W1124 14:23:02.605181       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:23:02.618435       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1124 14:23:02.707811       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-152851_b86a2b06-7d3b-4b45-b327-820f8093d619!
	W1124 14:23:04.621846       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:23:04.627062       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:23:06.630787       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:23:06.635278       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:23:08.639861       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:23:08.646477       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-152851 -n default-k8s-diff-port-152851
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-152851 -n default-k8s-diff-port-152851: exit status 2 (356.059126ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-152851 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-152851
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-152851:

-- stdout --
	[
	    {
	        "Id": "bb380e4fa749c80f5c1b19c95fcad1ed1f835691b8483c3ebe22f8e66973175a",
	        "Created": "2025-11-24T14:20:10.30310035Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 213119,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T14:21:54.577260132Z",
	            "FinishedAt": "2025-11-24T14:21:53.416493117Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/bb380e4fa749c80f5c1b19c95fcad1ed1f835691b8483c3ebe22f8e66973175a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/bb380e4fa749c80f5c1b19c95fcad1ed1f835691b8483c3ebe22f8e66973175a/hostname",
	        "HostsPath": "/var/lib/docker/containers/bb380e4fa749c80f5c1b19c95fcad1ed1f835691b8483c3ebe22f8e66973175a/hosts",
	        "LogPath": "/var/lib/docker/containers/bb380e4fa749c80f5c1b19c95fcad1ed1f835691b8483c3ebe22f8e66973175a/bb380e4fa749c80f5c1b19c95fcad1ed1f835691b8483c3ebe22f8e66973175a-json.log",
	        "Name": "/default-k8s-diff-port-152851",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "default-k8s-diff-port-152851:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-152851",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "bb380e4fa749c80f5c1b19c95fcad1ed1f835691b8483c3ebe22f8e66973175a",
	                "LowerDir": "/var/lib/docker/overlay2/47ed7fba18527758b18563a9035241518dd071ecf5409d539ee7b229eae0305b-init/diff:/var/lib/docker/overlay2/13a44a1c9c7389f495d930a01834ff28273a0e5eb2fe3411fc4db3ff0709690d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/47ed7fba18527758b18563a9035241518dd071ecf5409d539ee7b229eae0305b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/47ed7fba18527758b18563a9035241518dd071ecf5409d539ee7b229eae0305b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/47ed7fba18527758b18563a9035241518dd071ecf5409d539ee7b229eae0305b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-152851",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-152851/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-152851",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-152851",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-152851",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ba22ad00af0daff556271f44a387697422a1062f01305be1cf688a6e70e3cdb2",
	            "SandboxKey": "/var/run/docker/netns/ba22ad00af0d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33093"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33094"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33097"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33095"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33096"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-152851": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "92:e6:ab:e5:98:ed",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "13603eff9881a10c42cb9841bf658813f5fbc60eabf578cb82466b4c09374f11",
	                    "EndpointID": "7bc90bd8c48475fd9c3d578ddf891b9e4adc87e851032e6b3a5bc7a50d5924f2",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-152851",
	                        "bb380e4fa749"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-152851 -n default-k8s-diff-port-152851
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-152851 -n default-k8s-diff-port-152851: exit status 2 (358.925248ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-152851 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-152851 logs -n 25: (1.369045637s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ pause   │ -p no-preload-444317 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-444317            │ jenkins │ v1.37.0 │ 24 Nov 25 14:19 UTC │                     │
	│ delete  │ -p no-preload-444317                                                                                                                                                                                                                          │ no-preload-444317            │ jenkins │ v1.37.0 │ 24 Nov 25 14:20 UTC │ 24 Nov 25 14:20 UTC │
	│ delete  │ -p no-preload-444317                                                                                                                                                                                                                          │ no-preload-444317            │ jenkins │ v1.37.0 │ 24 Nov 25 14:20 UTC │ 24 Nov 25 14:20 UTC │
	│ delete  │ -p disable-driver-mounts-799392                                                                                                                                                                                                               │ disable-driver-mounts-799392 │ jenkins │ v1.37.0 │ 24 Nov 25 14:20 UTC │ 24 Nov 25 14:20 UTC │
	│ start   │ -p default-k8s-diff-port-152851 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-152851 │ jenkins │ v1.37.0 │ 24 Nov 25 14:20 UTC │ 24 Nov 25 14:21 UTC │
	│ image   │ embed-certs-720293 image list --format=json                                                                                                                                                                                                   │ embed-certs-720293           │ jenkins │ v1.37.0 │ 24 Nov 25 14:20 UTC │ 24 Nov 25 14:20 UTC │
	│ pause   │ -p embed-certs-720293 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-720293           │ jenkins │ v1.37.0 │ 24 Nov 25 14:20 UTC │                     │
	│ delete  │ -p embed-certs-720293                                                                                                                                                                                                                         │ embed-certs-720293           │ jenkins │ v1.37.0 │ 24 Nov 25 14:20 UTC │ 24 Nov 25 14:20 UTC │
	│ delete  │ -p embed-certs-720293                                                                                                                                                                                                                         │ embed-certs-720293           │ jenkins │ v1.37.0 │ 24 Nov 25 14:20 UTC │ 24 Nov 25 14:20 UTC │
	│ start   │ -p newest-cni-948249 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-948249            │ jenkins │ v1.37.0 │ 24 Nov 25 14:20 UTC │ 24 Nov 25 14:21 UTC │
	│ addons  │ enable metrics-server -p newest-cni-948249 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-948249            │ jenkins │ v1.37.0 │ 24 Nov 25 14:21 UTC │                     │
	│ stop    │ -p newest-cni-948249 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-948249            │ jenkins │ v1.37.0 │ 24 Nov 25 14:21 UTC │ 24 Nov 25 14:21 UTC │
	│ addons  │ enable dashboard -p newest-cni-948249 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-948249            │ jenkins │ v1.37.0 │ 24 Nov 25 14:21 UTC │ 24 Nov 25 14:21 UTC │
	│ start   │ -p newest-cni-948249 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-948249            │ jenkins │ v1.37.0 │ 24 Nov 25 14:21 UTC │ 24 Nov 25 14:21 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-152851 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-152851 │ jenkins │ v1.37.0 │ 24 Nov 25 14:21 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-152851 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-152851 │ jenkins │ v1.37.0 │ 24 Nov 25 14:21 UTC │ 24 Nov 25 14:21 UTC │
	│ image   │ newest-cni-948249 image list --format=json                                                                                                                                                                                                    │ newest-cni-948249            │ jenkins │ v1.37.0 │ 24 Nov 25 14:21 UTC │ 24 Nov 25 14:21 UTC │
	│ pause   │ -p newest-cni-948249 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-948249            │ jenkins │ v1.37.0 │ 24 Nov 25 14:21 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-152851 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-152851 │ jenkins │ v1.37.0 │ 24 Nov 25 14:21 UTC │ 24 Nov 25 14:21 UTC │
	│ start   │ -p default-k8s-diff-port-152851 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-152851 │ jenkins │ v1.37.0 │ 24 Nov 25 14:21 UTC │ 24 Nov 25 14:22 UTC │
	│ delete  │ -p newest-cni-948249                                                                                                                                                                                                                          │ newest-cni-948249            │ jenkins │ v1.37.0 │ 24 Nov 25 14:21 UTC │ 24 Nov 25 14:21 UTC │
	│ delete  │ -p newest-cni-948249                                                                                                                                                                                                                          │ newest-cni-948249            │ jenkins │ v1.37.0 │ 24 Nov 25 14:21 UTC │ 24 Nov 25 14:21 UTC │
	│ start   │ -p auto-626991 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-626991                  │ jenkins │ v1.37.0 │ 24 Nov 25 14:21 UTC │                     │
	│ image   │ default-k8s-diff-port-152851 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-152851 │ jenkins │ v1.37.0 │ 24 Nov 25 14:23 UTC │ 24 Nov 25 14:23 UTC │
	│ pause   │ -p default-k8s-diff-port-152851 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-152851 │ jenkins │ v1.37.0 │ 24 Nov 25 14:23 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 14:21:57
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 14:21:57.112296  213874 out.go:360] Setting OutFile to fd 1 ...
	I1124 14:21:57.112478  213874 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 14:21:57.112491  213874 out.go:374] Setting ErrFile to fd 2...
	I1124 14:21:57.112498  213874 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 14:21:57.112811  213874 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-2805/.minikube/bin
	I1124 14:21:57.113303  213874 out.go:368] Setting JSON to false
	I1124 14:21:57.114245  213874 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":7469,"bootTime":1763986649,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1124 14:21:57.114316  213874 start.go:143] virtualization:  
	I1124 14:21:57.118107  213874 out.go:179] * [auto-626991] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1124 14:21:57.121444  213874 out.go:179]   - MINIKUBE_LOCATION=21932
	I1124 14:21:57.121507  213874 notify.go:221] Checking for updates...
	I1124 14:21:57.127576  213874 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 14:21:57.130724  213874 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21932-2805/kubeconfig
	I1124 14:21:57.133783  213874 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21932-2805/.minikube
	I1124 14:21:57.136708  213874 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1124 14:21:57.139549  213874 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 14:21:57.143065  213874 config.go:182] Loaded profile config "default-k8s-diff-port-152851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 14:21:57.143197  213874 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 14:21:57.168024  213874 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1124 14:21:57.168155  213874 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 14:21:57.232809  213874 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-24 14:21:57.223266812 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 14:21:57.232915  213874 docker.go:319] overlay module found
	I1124 14:21:57.236224  213874 out.go:179] * Using the docker driver based on user configuration
	I1124 14:21:57.239181  213874 start.go:309] selected driver: docker
	I1124 14:21:57.239202  213874 start.go:927] validating driver "docker" against <nil>
	I1124 14:21:57.239216  213874 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 14:21:57.240061  213874 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 14:21:57.290589  213874 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-24 14:21:57.281138441 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 14:21:57.290757  213874 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1124 14:21:57.291016  213874 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 14:21:57.293889  213874 out.go:179] * Using Docker driver with root privileges
	I1124 14:21:57.296782  213874 cni.go:84] Creating CNI manager for ""
	I1124 14:21:57.296863  213874 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 14:21:57.296877  213874 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1124 14:21:57.296954  213874 start.go:353] cluster config:
	{Name:auto-626991 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-626991 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 14:21:57.300233  213874 out.go:179] * Starting "auto-626991" primary control-plane node in "auto-626991" cluster
	I1124 14:21:57.302941  213874 cache.go:134] Beginning downloading kic base image for docker with crio
	I1124 14:21:57.305898  213874 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1124 14:21:57.308820  213874 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 14:21:57.308867  213874 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21932-2805/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1124 14:21:57.308878  213874 cache.go:65] Caching tarball of preloaded images
	I1124 14:21:57.308909  213874 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1124 14:21:57.308977  213874 preload.go:238] Found /home/jenkins/minikube-integration/21932-2805/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1124 14:21:57.308989  213874 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
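
The two preload lines above amount to a file-existence test against a versioned tarball in the local cache. A minimal bash sketch of the same decision, assuming the .minikube cache layout shown in this log:

	# if the versioned preload tarball is already cached, the download is skipped
	PRELOAD="$HOME/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4"
	if [ -f "$PRELOAD" ]; then
	    echo "found local preload, skipping download"
	else
	    echo "no cached preload; it would be fetched before node creation"
	fi
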
	I1124 14:21:57.309107  213874 profile.go:143] Saving config to /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/auto-626991/config.json ...
	I1124 14:21:57.309126  213874 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/auto-626991/config.json: {Name:mkcc529a20f79a6894765fd4690705e536e8a416 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:21:57.328800  213874 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1124 14:21:57.328823  213874 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1124 14:21:57.328844  213874 cache.go:240] Successfully downloaded all kic artifacts
	I1124 14:21:57.328876  213874 start.go:360] acquireMachinesLock for auto-626991: {Name:mk763e6682356d95f9cf88abe6cabc12d66c573c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 14:21:57.328981  213874 start.go:364] duration metric: took 84.661µs to acquireMachinesLock for "auto-626991"
	I1124 14:21:57.329011  213874 start.go:93] Provisioning new machine with config: &{Name:auto-626991 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-626991 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 14:21:57.329092  213874 start.go:125] createHost starting for "" (driver="docker")
	I1124 14:21:54.538107  212938 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-152851" ...
	I1124 14:21:54.538194  212938 cli_runner.go:164] Run: docker start default-k8s-diff-port-152851
	I1124 14:21:54.866512  212938 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-152851 --format={{.State.Status}}
	I1124 14:21:54.921096  212938 kic.go:430] container "default-k8s-diff-port-152851" state is running.
	I1124 14:21:54.921481  212938 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-152851
	I1124 14:21:54.964203  212938 profile.go:143] Saving config to /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/default-k8s-diff-port-152851/config.json ...
	I1124 14:21:54.965019  212938 machine.go:94] provisionDockerMachine start ...
	I1124 14:21:54.965131  212938 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-152851
	I1124 14:21:54.995485  212938 main.go:143] libmachine: Using SSH client type: native
	I1124 14:21:54.995866  212938 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1124 14:21:54.995885  212938 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 14:21:55.001290  212938 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1124 14:21:58.183129  212938 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-152851
	
	I1124 14:21:58.183152  212938 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-152851"
	I1124 14:21:58.183214  212938 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-152851
	I1124 14:21:58.224155  212938 main.go:143] libmachine: Using SSH client type: native
	I1124 14:21:58.224473  212938 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1124 14:21:58.224484  212938 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-152851 && echo "default-k8s-diff-port-152851" | sudo tee /etc/hostname
	I1124 14:21:58.404453  212938 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-152851
	
	I1124 14:21:58.404527  212938 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-152851
	I1124 14:21:58.428415  212938 main.go:143] libmachine: Using SSH client type: native
	I1124 14:21:58.428734  212938 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1124 14:21:58.428754  212938 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-152851' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-152851/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-152851' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 14:21:58.583606  212938 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 14:21:58.583633  212938 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21932-2805/.minikube CaCertPath:/home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21932-2805/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21932-2805/.minikube}
	I1124 14:21:58.583700  212938 ubuntu.go:190] setting up certificates
	I1124 14:21:58.583711  212938 provision.go:84] configureAuth start
	I1124 14:21:58.583789  212938 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-152851
	I1124 14:21:58.607183  212938 provision.go:143] copyHostCerts
	I1124 14:21:58.607258  212938 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-2805/.minikube/ca.pem, removing ...
	I1124 14:21:58.607277  212938 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-2805/.minikube/ca.pem
	I1124 14:21:58.607387  212938 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21932-2805/.minikube/ca.pem (1078 bytes)
	I1124 14:21:58.607512  212938 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-2805/.minikube/cert.pem, removing ...
	I1124 14:21:58.607525  212938 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-2805/.minikube/cert.pem
	I1124 14:21:58.607561  212938 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-2805/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21932-2805/.minikube/cert.pem (1123 bytes)
	I1124 14:21:58.607638  212938 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-2805/.minikube/key.pem, removing ...
	I1124 14:21:58.607649  212938 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-2805/.minikube/key.pem
	I1124 14:21:58.607680  212938 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-2805/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21932-2805/.minikube/key.pem (1675 bytes)
	I1124 14:21:58.607737  212938 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21932-2805/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-152851 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-152851 localhost minikube]
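
minikube generates this server certificate in Go (crypto/x509), but an openssl equivalent makes the inputs explicit. A sketch only, reusing the org and SAN list from the line above with illustrative file names:

	# sign a server cert with the profile CA, same subject org and SANs as above
	openssl req -new -key server-key.pem \
	    -subj "/O=jenkins.default-k8s-diff-port-152851" |
	openssl x509 -req -CA ca.pem -CAkey ca-key.pem -CAcreateserial -days 365 \
	    -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.76.2,DNS:default-k8s-diff-port-152851,DNS:localhost,DNS:minikube') \
	    -out server.pem
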
	I1124 14:21:58.733801  212938 provision.go:177] copyRemoteCerts
	I1124 14:21:58.733893  212938 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 14:21:58.733959  212938 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-152851
	I1124 14:21:58.752728  212938 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/default-k8s-diff-port-152851/id_rsa Username:docker}
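
Every "new ssh client" line like the one above pairs a host port (published by Docker for the container's 22/tcp) with the profile's generated key. The hop can be reproduced by hand; $MINIKUBE_HOME below stands in for the .minikube directory from this run:

	# find the published SSH port and log in the way minikube does
	PROFILE=default-k8s-diff-port-152851
	PORT=$(docker container inspect -f \
	    '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' "$PROFILE")
	ssh -o StrictHostKeyChecking=no \
	    -i "$MINIKUBE_HOME/machines/$PROFILE/id_rsa" \
	    -p "$PORT" docker@127.0.0.1 hostname
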
	I1124 14:21:58.860459  212938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1124 14:21:58.880180  212938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1124 14:21:58.899386  212938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1124 14:21:58.918296  212938 provision.go:87] duration metric: took 334.537171ms to configureAuth
	I1124 14:21:58.918391  212938 ubuntu.go:206] setting minikube options for container-runtime
	I1124 14:21:58.918695  212938 config.go:182] Loaded profile config "default-k8s-diff-port-152851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 14:21:58.918939  212938 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-152851
	I1124 14:21:58.952185  212938 main.go:143] libmachine: Using SSH client type: native
	I1124 14:21:58.952918  212938 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1124 14:21:58.952960  212938 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1124 14:21:57.332560  213874 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1124 14:21:57.332849  213874 start.go:159] libmachine.API.Create for "auto-626991" (driver="docker")
	I1124 14:21:57.332885  213874 client.go:173] LocalClient.Create starting
	I1124 14:21:57.332968  213874 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca.pem
	I1124 14:21:57.333009  213874 main.go:143] libmachine: Decoding PEM data...
	I1124 14:21:57.333034  213874 main.go:143] libmachine: Parsing certificate...
	I1124 14:21:57.333096  213874 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21932-2805/.minikube/certs/cert.pem
	I1124 14:21:57.333117  213874 main.go:143] libmachine: Decoding PEM data...
	I1124 14:21:57.333132  213874 main.go:143] libmachine: Parsing certificate...
	I1124 14:21:57.333510  213874 cli_runner.go:164] Run: docker network inspect auto-626991 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1124 14:21:57.350020  213874 cli_runner.go:211] docker network inspect auto-626991 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1124 14:21:57.350103  213874 network_create.go:284] running [docker network inspect auto-626991] to gather additional debugging logs...
	I1124 14:21:57.350123  213874 cli_runner.go:164] Run: docker network inspect auto-626991
	W1124 14:21:57.366099  213874 cli_runner.go:211] docker network inspect auto-626991 returned with exit code 1
	I1124 14:21:57.366138  213874 network_create.go:287] error running [docker network inspect auto-626991]: docker network inspect auto-626991: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network auto-626991 not found
	I1124 14:21:57.366154  213874 network_create.go:289] output of [docker network inspect auto-626991]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network auto-626991 not found
	
	** /stderr **
	I1124 14:21:57.366251  213874 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 14:21:57.384460  213874 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-b3087ee9f269 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:1a:07:60:94:e6:54} reservation:<nil>}
	I1124 14:21:57.384803  213874 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-87dca5a19352 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:e6:6c:c1:85:45:94} reservation:<nil>}
	I1124 14:21:57.385156  213874 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-9e995bd1b79e IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:82:f1:73:f5:6f:cf} reservation:<nil>}
	I1124 14:21:57.385423  213874 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-13603eff9881 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:26:0b:69:f9:14:50} reservation:<nil>}
	I1124 14:21:57.385824  213874 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019e1760}
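
The scan above walks candidate 192.168.x.0/24 ranges (49, 58, 67, 76, ... in this run) and takes the first one no existing bridge claims. The occupied subnets it skipped can be listed directly:

	# subnets already claimed by docker networks -- the data the scan skips over
	docker network ls -q | xargs -n1 docker network inspect \
	    -f '{{.Name}} {{range .IPAM.Config}}{{.Subnet}}{{end}}'
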
	I1124 14:21:57.385846  213874 network_create.go:124] attempt to create docker network auto-626991 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1124 14:21:57.385902  213874 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-626991 auto-626991
	I1124 14:21:57.443679  213874 network_create.go:108] docker network auto-626991 192.168.85.0/24 created
	I1124 14:21:57.443715  213874 kic.go:121] calculated static IP "192.168.85.2" for the "auto-626991" container
	I1124 14:21:57.443788  213874 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1124 14:21:57.460620  213874 cli_runner.go:164] Run: docker volume create auto-626991 --label name.minikube.sigs.k8s.io=auto-626991 --label created_by.minikube.sigs.k8s.io=true
	I1124 14:21:57.477462  213874 oci.go:103] Successfully created a docker volume auto-626991
	I1124 14:21:57.477543  213874 cli_runner.go:164] Run: docker run --rm --name auto-626991-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-626991 --entrypoint /usr/bin/test -v auto-626991:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1124 14:21:58.033230  213874 oci.go:107] Successfully prepared a docker volume auto-626991
	I1124 14:21:58.033321  213874 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 14:21:58.033336  213874 kic.go:194] Starting extracting preloaded images to volume ...
	I1124 14:21:58.033411  213874 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21932-2805/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v auto-626991:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir
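
The preload is unpacked straight into the named volume by running tar in a throwaway container, so nothing is staged on the host filesystem. The same pattern in isolation, with illustrative volume and tarball names:

	# populate a named volume by running tar in a disposable container
	docker volume create demo-data
	docker run --rm \
	    -v "$PWD/preloaded-images.tar.lz4:/preloaded.tar:ro" \
	    -v demo-data:/extractDir \
	    --entrypoint /usr/bin/tar \
	    gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f \
	    -I lz4 -xf /preloaded.tar -C /extractDir
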
	I1124 14:21:59.359890  212938 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1124 14:21:59.359914  212938 machine.go:97] duration metric: took 4.394873143s to provisionDockerMachine
	I1124 14:21:59.359925  212938 start.go:293] postStartSetup for "default-k8s-diff-port-152851" (driver="docker")
	I1124 14:21:59.359935  212938 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 14:21:59.360015  212938 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 14:21:59.360059  212938 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-152851
	I1124 14:21:59.383585  212938 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/default-k8s-diff-port-152851/id_rsa Username:docker}
	I1124 14:21:59.503374  212938 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 14:21:59.507479  212938 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 14:21:59.507503  212938 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 14:21:59.507514  212938 filesync.go:126] Scanning /home/jenkins/minikube-integration/21932-2805/.minikube/addons for local assets ...
	I1124 14:21:59.507604  212938 filesync.go:126] Scanning /home/jenkins/minikube-integration/21932-2805/.minikube/files for local assets ...
	I1124 14:21:59.507678  212938 filesync.go:149] local asset: /home/jenkins/minikube-integration/21932-2805/.minikube/files/etc/ssl/certs/46112.pem -> 46112.pem in /etc/ssl/certs
	I1124 14:21:59.507778  212938 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 14:21:59.515293  212938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/files/etc/ssl/certs/46112.pem --> /etc/ssl/certs/46112.pem (1708 bytes)
	I1124 14:21:59.532938  212938 start.go:296] duration metric: took 172.997936ms for postStartSetup
	I1124 14:21:59.533101  212938 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 14:21:59.533176  212938 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-152851
	I1124 14:21:59.556813  212938 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/default-k8s-diff-port-152851/id_rsa Username:docker}
	I1124 14:21:59.669065  212938 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 14:21:59.674177  212938 fix.go:56] duration metric: took 5.171513144s for fixHost
	I1124 14:21:59.674201  212938 start.go:83] releasing machines lock for "default-k8s-diff-port-152851", held for 5.171566584s
	I1124 14:21:59.674279  212938 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-152851
	I1124 14:21:59.693017  212938 ssh_runner.go:195] Run: cat /version.json
	I1124 14:21:59.693069  212938 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-152851
	I1124 14:21:59.693096  212938 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 14:21:59.693168  212938 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-152851
	I1124 14:21:59.728674  212938 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/default-k8s-diff-port-152851/id_rsa Username:docker}
	I1124 14:21:59.736549  212938 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/default-k8s-diff-port-152851/id_rsa Username:docker}
	I1124 14:21:59.934602  212938 ssh_runner.go:195] Run: systemctl --version
	I1124 14:21:59.941653  212938 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1124 14:21:59.982049  212938 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 14:21:59.986874  212938 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 14:21:59.986950  212938 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 14:21:59.995899  212938 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1124 14:21:59.995919  212938 start.go:496] detecting cgroup driver to use...
	I1124 14:21:59.995949  212938 detect.go:187] detected "cgroupfs" cgroup driver on host os
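
The "cgroupfs" result agrees with the CgroupDriver field in the docker info dump at the top of this run; a quick cross-check on any host:

	# the daemon's cgroup driver, which the CRI-O config below is made to match
	docker info -f '{{.CgroupDriver}}'    # prints cgroupfs on this host
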
	I1124 14:21:59.995995  212938 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1124 14:22:00.015278  212938 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1124 14:22:00.036407  212938 docker.go:218] disabling cri-docker service (if available) ...
	I1124 14:22:00.036528  212938 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 14:22:00.059047  212938 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 14:22:00.077108  212938 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 14:22:00.409602  212938 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 14:22:00.569066  212938 docker.go:234] disabling docker service ...
	I1124 14:22:00.569193  212938 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 14:22:00.587297  212938 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 14:22:00.602801  212938 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 14:22:00.762853  212938 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 14:22:00.951982  212938 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 14:22:00.967657  212938 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 14:22:00.985913  212938 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1124 14:22:00.986026  212938 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 14:22:00.995886  212938 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1124 14:22:00.996010  212938 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 14:22:01.007511  212938 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 14:22:01.019378  212938 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 14:22:01.029975  212938 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 14:22:01.039882  212938 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 14:22:01.050560  212938 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 14:22:01.060716  212938 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 14:22:01.071115  212938 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 14:22:01.080721  212938 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 14:22:01.089681  212938 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 14:22:01.249309  212938 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1124 14:22:03.372259  212938 ssh_runner.go:235] Completed: sudo systemctl restart crio: (2.122909114s)
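
Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly these settings (a sketch of the net effect only; the actual drop-in carries more keys):

	pause_image = "registry.k8s.io/pause:3.10.1"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
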
	I1124 14:22:03.372352  212938 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1124 14:22:03.372417  212938 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1124 14:22:03.379714  212938 start.go:564] Will wait 60s for crictl version
	I1124 14:22:03.379784  212938 ssh_runner.go:195] Run: which crictl
	I1124 14:22:03.386118  212938 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 14:22:03.431877  212938 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1124 14:22:03.431979  212938 ssh_runner.go:195] Run: crio --version
	I1124 14:22:03.467468  212938 ssh_runner.go:195] Run: crio --version
	I1124 14:22:03.506634  212938 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1124 14:22:03.507792  212938 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-152851 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 14:22:03.526466  212938 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1124 14:22:03.531687  212938 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 14:22:03.544713  212938 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-152851 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-152851 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 14:22:03.544840  212938 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 14:22:03.544888  212938 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 14:22:03.637182  212938 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 14:22:03.637202  212938 crio.go:433] Images already preloaded, skipping extraction
	I1124 14:22:03.637254  212938 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 14:22:03.701499  212938 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 14:22:03.701519  212938 cache_images.go:86] Images are preloaded, skipping loading
	I1124 14:22:03.701526  212938 kubeadm.go:935] updating node { 192.168.76.2 8444 v1.34.1 crio true true} ...
	I1124 14:22:03.701623  212938 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-152851 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-152851 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
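
Note the empty ExecStart= preceding the real one in the unit above: systemd drop-ins append to list-valued settings, so the blank assignment clears the packaged command before substituting minikube's. On the node, the merged unit can be inspected with:

	# show the kubelet unit plus the 10-kubeadm.conf drop-in written below
	systemctl cat kubelet
	# or list every override/extension relationship on the machine
	systemd-delta --type=extended
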
	I1124 14:22:03.701702  212938 ssh_runner.go:195] Run: crio config
	I1124 14:22:03.840084  212938 cni.go:84] Creating CNI manager for ""
	I1124 14:22:03.840103  212938 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 14:22:03.840118  212938 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 14:22:03.840171  212938 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-152851 NodeName:default-k8s-diff-port-152851 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 14:22:03.840291  212938 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-152851"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1124 14:22:03.840360  212938 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1124 14:22:03.857652  212938 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 14:22:03.857729  212938 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 14:22:03.869884  212938 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1124 14:22:03.892019  212938 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 14:22:03.929624  212938 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
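
Once the rendered config lands at /var/tmp/minikube/kubeadm.yaml.new, it can also be checked with kubeadm itself; `kubeadm config validate` should be available in the pinned v1.34.1 binary (an assumption, this log does not run it):

	# sanity-check the generated kubeadm config with the pinned binary
	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
	    --config /var/tmp/minikube/kubeadm.yaml.new
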
	I1124 14:22:03.952807  212938 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1124 14:22:03.967246  212938 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 14:22:03.985265  212938 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 14:22:04.185058  212938 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 14:22:04.210128  212938 certs.go:69] Setting up /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/default-k8s-diff-port-152851 for IP: 192.168.76.2
	I1124 14:22:04.210146  212938 certs.go:195] generating shared ca certs ...
	I1124 14:22:04.210162  212938 certs.go:227] acquiring lock for ca certs: {Name:mk5b88bcf3bee8e73291a2c9c79f99bafa2afa7b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:22:04.210307  212938 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21932-2805/.minikube/ca.key
	I1124 14:22:04.210360  212938 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21932-2805/.minikube/proxy-client-ca.key
	I1124 14:22:04.210368  212938 certs.go:257] generating profile certs ...
	I1124 14:22:04.210454  212938 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/default-k8s-diff-port-152851/client.key
	I1124 14:22:04.210532  212938 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/default-k8s-diff-port-152851/apiserver.key.ec9a3231
	I1124 14:22:04.210571  212938 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/default-k8s-diff-port-152851/proxy-client.key
	I1124 14:22:04.210687  212938 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2805/.minikube/certs/4611.pem (1338 bytes)
	W1124 14:22:04.210723  212938 certs.go:480] ignoring /home/jenkins/minikube-integration/21932-2805/.minikube/certs/4611_empty.pem, impossibly tiny 0 bytes
	I1124 14:22:04.210732  212938 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca-key.pem (1679 bytes)
	I1124 14:22:04.210768  212938 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca.pem (1078 bytes)
	I1124 14:22:04.210792  212938 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2805/.minikube/certs/cert.pem (1123 bytes)
	I1124 14:22:04.210819  212938 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2805/.minikube/certs/key.pem (1675 bytes)
	I1124 14:22:04.210864  212938 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2805/.minikube/files/etc/ssl/certs/46112.pem (1708 bytes)
	I1124 14:22:04.211577  212938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 14:22:04.275796  212938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1124 14:22:04.338011  212938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 14:22:04.370874  212938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1124 14:22:04.396771  212938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/default-k8s-diff-port-152851/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1124 14:22:04.428092  212938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/default-k8s-diff-port-152851/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1124 14:22:04.472372  212938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/default-k8s-diff-port-152851/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 14:22:04.532591  212938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/default-k8s-diff-port-152851/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1124 14:22:04.594977  212938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 14:22:04.686872  212938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/certs/4611.pem --> /usr/share/ca-certificates/4611.pem (1338 bytes)
	I1124 14:22:04.731873  212938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/files/etc/ssl/certs/46112.pem --> /usr/share/ca-certificates/46112.pem (1708 bytes)
	I1124 14:22:04.766122  212938 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 14:22:04.802940  212938 ssh_runner.go:195] Run: openssl version
	I1124 14:22:04.810628  212938 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/46112.pem && ln -fs /usr/share/ca-certificates/46112.pem /etc/ssl/certs/46112.pem"
	I1124 14:22:04.823072  212938 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/46112.pem
	I1124 14:22:04.827880  212938 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 13:21 /usr/share/ca-certificates/46112.pem
	I1124 14:22:04.828020  212938 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/46112.pem
	I1124 14:22:04.876150  212938 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/46112.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 14:22:04.887558  212938 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 14:22:04.897014  212938 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 14:22:04.901457  212938 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 13:14 /usr/share/ca-certificates/minikubeCA.pem
	I1124 14:22:04.901570  212938 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 14:22:04.946597  212938 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 14:22:04.956191  212938 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4611.pem && ln -fs /usr/share/ca-certificates/4611.pem /etc/ssl/certs/4611.pem"
	I1124 14:22:04.966117  212938 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4611.pem
	I1124 14:22:04.970360  212938 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 13:21 /usr/share/ca-certificates/4611.pem
	I1124 14:22:04.970427  212938 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4611.pem
	I1124 14:22:05.014900  212938 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4611.pem /etc/ssl/certs/51391683.0"
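
The hex link names above follow OpenSSL's subject-hash convention: `openssl x509 -hash` prints the hash the library uses to look up CAs in /etc/ssl/certs, and each <hash>.0 symlink matches it:

	# b5213941 is the subject hash of minikubeCA.pem, matching the link above
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	ls -l /etc/ssl/certs/b5213941.0
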
	I1124 14:22:05.023709  212938 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 14:22:05.028496  212938 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1124 14:22:05.072067  212938 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1124 14:22:05.131036  212938 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1124 14:22:05.190902  212938 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1124 14:22:05.246132  212938 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1124 14:22:05.304200  212938 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
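
Each -checkend 86400 call asks openssl whether the certificate expires within the next 86400 seconds, so the zero exits here mean every control-plane cert is good for at least another day:

	# exits 0 iff the cert is still valid 86400 s (24 h) from now
	sudo openssl x509 -noout -checkend 86400 \
	    -in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
	    && echo "valid for at least 24h"
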
	I1124 14:22:05.402037  212938 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-152851 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-152851 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 14:22:05.402133  212938 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 14:22:05.402197  212938 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 14:22:05.451390  212938 cri.go:89] found id: "0c3cb362e4d9189052f992f04fe50fac0c17ff2bd5f72ef4be40e433331ba291"
	I1124 14:22:05.451409  212938 cri.go:89] found id: "79537370e6485bd82564920e391bc4bdfa906e6f8c7d96a71aac5f90ea93fca2"
	I1124 14:22:05.451414  212938 cri.go:89] found id: "1ab258e8f64fcb7b1fb7769531303c2b402470d9e0ae16ffe932a98857f4fd05"
	I1124 14:22:05.451417  212938 cri.go:89] found id: "6f961bf32124218ddecf96482422e8b5741e2a3a4f6241f531f98636f588acab"
	I1124 14:22:05.451420  212938 cri.go:89] found id: ""
	I1124 14:22:05.451466  212938 ssh_runner.go:195] Run: sudo runc list -f json
	W1124 14:22:05.470571  212938 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T14:22:05Z" level=error msg="open /run/runc: no such file or directory"
	I1124 14:22:05.470694  212938 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 14:22:05.482215  212938 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1124 14:22:05.482280  212938 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1124 14:22:05.482361  212938 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1124 14:22:05.492856  212938 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1124 14:22:05.493342  212938 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-152851" does not appear in /home/jenkins/minikube-integration/21932-2805/kubeconfig
	I1124 14:22:05.493511  212938 kubeconfig.go:62] /home/jenkins/minikube-integration/21932-2805/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-152851" cluster setting kubeconfig missing "default-k8s-diff-port-152851" context setting]
	I1124 14:22:05.493865  212938 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2805/kubeconfig: {Name:mk95d10d27091d631e85a5a3c35d5e4e38630871 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:22:05.495346  212938 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1124 14:22:05.507808  212938 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1124 14:22:05.507887  212938 kubeadm.go:602] duration metric: took 25.58908ms to restartPrimaryControlPlane
	I1124 14:22:05.507922  212938 kubeadm.go:403] duration metric: took 105.893112ms to StartCluster
	I1124 14:22:05.507964  212938 settings.go:142] acquiring lock: {Name:mk89c1ba43c874315f683e1eb3a8f5ff3817a931 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:22:05.508068  212938 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21932-2805/kubeconfig
	I1124 14:22:05.508824  212938 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2805/kubeconfig: {Name:mk95d10d27091d631e85a5a3c35d5e4e38630871 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:22:05.509320  212938 config.go:182] Loaded profile config "default-k8s-diff-port-152851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 14:22:05.509453  212938 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 14:22:05.509550  212938 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-152851"
	I1124 14:22:05.509586  212938 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-152851"
	W1124 14:22:05.509630  212938 addons.go:248] addon storage-provisioner should already be in state true
	I1124 14:22:05.509676  212938 host.go:66] Checking if "default-k8s-diff-port-152851" exists ...
	I1124 14:22:05.510348  212938 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-152851 --format={{.State.Status}}
	I1124 14:22:05.510559  212938 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 14:22:05.511075  212938 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-152851"
	I1124 14:22:05.511091  212938 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-152851"
	W1124 14:22:05.511098  212938 addons.go:248] addon dashboard should already be in state true
	I1124 14:22:05.511118  212938 host.go:66] Checking if "default-k8s-diff-port-152851" exists ...
	I1124 14:22:05.511581  212938 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-152851 --format={{.State.Status}}
	I1124 14:22:05.511923  212938 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-152851"
	I1124 14:22:05.511939  212938 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-152851"
	I1124 14:22:05.512203  212938 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-152851 --format={{.State.Status}}
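The toEnable map above drives a simple restore loop: every addon the profile previously had enabled is forced back to true and re-applied. The W-level "should already be in state true" lines are informational, not errors, since re-enabling is idempotent on a restart. A sketch of that pattern, with enableAddon as a hypothetical stand-in for minikube's enable path:

	package main

	import "fmt"

	// enableAddon is a hypothetical stand-in for minikube's per-addon enable path.
	func enableAddon(profile, name string) error {
		fmt.Printf("Setting addon %s=true in %q\n", name, profile)
		return nil
	}

	func main() {
		// Subset of the toEnable map from addons.go:527.
		toEnable := map[string]bool{
			"storage-provisioner":  true,
			"default-storageclass": true,
			"dashboard":            true,
		}
		for name, want := range toEnable {
			if !want {
				continue // disabled addons are left alone
			}
			if err := enableAddon("default-k8s-diff-port-152851", name); err != nil {
				fmt.Println("enable failed:", err)
			}
		}
	}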
	I1124 14:22:05.515258  212938 out.go:179] * Verifying Kubernetes components...
	I1124 14:22:05.519109  212938 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 14:22:05.553010  212938 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 14:22:05.554292  212938 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 14:22:05.554313  212938 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 14:22:05.554387  212938 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-152851
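"scp memory --> ..." means the asset is rendered in memory and streamed over the SSH connection rather than staged as a local file first. minikube's ssh_runner speaks the scp protocol; this hedged sketch gets the same effect with sudo tee over an x/crypto/ssh session:

	package sshcopy

	import (
		"bytes"
		"fmt"

		"golang.org/x/crypto/ssh"
	)

	// copyMemory streams an in-memory asset to a remote path through an
	// existing SSH connection; sudo tee is a shorter stand-in here for the
	// scp protocol the real ssh_runner implements.
	func copyMemory(client *ssh.Client, data []byte, dst string) error {
		sess, err := client.NewSession()
		if err != nil {
			return err
		}
		defer sess.Close()
		sess.Stdin = bytes.NewReader(data)
		return sess.Run(fmt.Sprintf("sudo tee %q >/dev/null", dst))
	}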
	I1124 14:22:05.575903  212938 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1124 14:22:05.577036  212938 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1124 14:22:03.167196  213874 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21932-2805/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v auto-626991:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir: (5.133724251s)
	I1124 14:22:03.167232  213874 kic.go:203] duration metric: took 5.133892852s to extract preloaded images to volume ...
	W1124 14:22:03.167391  213874 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1124 14:22:03.167515  213874 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1124 14:22:03.251572  213874 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-626991 --name auto-626991 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-626991 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-626991 --network auto-626991 --ip 192.168.85.2 --volume auto-626991:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1124 14:22:03.558028  213874 cli_runner.go:164] Run: docker container inspect auto-626991 --format={{.State.Running}}
	I1124 14:22:03.585778  213874 cli_runner.go:164] Run: docker container inspect auto-626991 --format={{.State.Status}}
	I1124 14:22:03.614697  213874 cli_runner.go:164] Run: docker exec auto-626991 stat /var/lib/dpkg/alternatives/iptables
	I1124 14:22:03.692018  213874 oci.go:144] the created container "auto-626991" has a running status.
	I1124 14:22:03.692044  213874 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21932-2805/.minikube/machines/auto-626991/id_rsa...
	I1124 14:22:03.868968  213874 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21932-2805/.minikube/machines/auto-626991/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1124 14:22:03.904706  213874 cli_runner.go:164] Run: docker container inspect auto-626991 --format={{.State.Status}}
	I1124 14:22:03.931335  213874 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1124 14:22:03.931369  213874 kic_runner.go:114] Args: [docker exec --privileged auto-626991 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1124 14:22:03.995063  213874 cli_runner.go:164] Run: docker container inspect auto-626991 --format={{.State.Status}}
	I1124 14:22:04.019068  213874 machine.go:94] provisionDockerMachine start ...
	I1124 14:22:04.019177  213874 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-626991
	I1124 14:22:04.045694  213874 main.go:143] libmachine: Using SSH client type: native
	I1124 14:22:04.046042  213874 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1124 14:22:04.046051  213874 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 14:22:04.047735  213874 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
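The "Error dialing TCP: ssh: handshake failed: EOF" line is expected: sshd inside the freshly created container is not up yet, and the client simply retries until it answers (success follows at 14:22:07). A sketch of that dial-with-retry loop; the user, key, and loopback-published port match the log, while the retry budget is illustrative:

	package sshdial

	import (
		"time"

		"golang.org/x/crypto/ssh"
	)

	// dialWithRetry retries the SSH handshake until sshd in the new container
	// answers; the first attempt typically fails with "ssh: handshake failed: EOF".
	func dialWithRetry(addr string, key []byte) (*ssh.Client, error) {
		signer, err := ssh.ParsePrivateKey(key) // e.g. .minikube/machines/auto-626991/id_rsa
		if err != nil {
			return nil, err
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // port is published on 127.0.0.1 only (here :33098)
			Timeout:         5 * time.Second,
		}
		var lastErr error
		for i := 0; i < 30; i++ {
			c, err := ssh.Dial("tcp", addr, cfg)
			if err == nil {
				return c, nil
			}
			lastErr = err
			time.Sleep(time.Second)
		}
		return nil, lastErr
	}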
	I1124 14:22:05.579453  212938 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1124 14:22:05.579483  212938 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1124 14:22:05.579549  212938 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-152851
	I1124 14:22:05.588291  212938 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-152851"
	W1124 14:22:05.588321  212938 addons.go:248] addon default-storageclass should already be in state true
	I1124 14:22:05.588346  212938 host.go:66] Checking if "default-k8s-diff-port-152851" exists ...
	I1124 14:22:05.588783  212938 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-152851 --format={{.State.Status}}
	I1124 14:22:05.616806  212938 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/default-k8s-diff-port-152851/id_rsa Username:docker}
	I1124 14:22:05.635324  212938 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 14:22:05.635348  212938 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 14:22:05.635496  212938 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-152851
	I1124 14:22:05.637826  212938 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/default-k8s-diff-port-152851/id_rsa Username:docker}
	I1124 14:22:05.670029  212938 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/default-k8s-diff-port-152851/id_rsa Username:docker}
	I1124 14:22:05.821336  212938 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 14:22:05.863732  212938 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1124 14:22:05.863821  212938 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1124 14:22:05.888626  212938 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 14:22:05.922906  212938 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 14:22:05.945666  212938 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1124 14:22:05.945740  212938 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1124 14:22:06.041146  212938 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1124 14:22:06.041221  212938 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1124 14:22:06.097892  212938 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1124 14:22:06.097962  212938 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1124 14:22:06.162376  212938 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1124 14:22:06.162453  212938 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1124 14:22:06.196841  212938 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1124 14:22:06.196920  212938 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1124 14:22:06.231679  212938 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1124 14:22:06.231761  212938 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1124 14:22:06.257050  212938 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1124 14:22:06.257112  212938 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1124 14:22:06.277194  212938 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1124 14:22:06.277271  212938 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1124 14:22:06.302608  212938 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1124 14:22:07.234983  213874 main.go:143] libmachine: SSH cmd err, output: <nil>: auto-626991
	
	I1124 14:22:07.235008  213874 ubuntu.go:182] provisioning hostname "auto-626991"
	I1124 14:22:07.235072  213874 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-626991
	I1124 14:22:07.262773  213874 main.go:143] libmachine: Using SSH client type: native
	I1124 14:22:07.263094  213874 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1124 14:22:07.263114  213874 main.go:143] libmachine: About to run SSH command:
	sudo hostname auto-626991 && echo "auto-626991" | sudo tee /etc/hostname
	I1124 14:22:07.459927  213874 main.go:143] libmachine: SSH cmd err, output: <nil>: auto-626991
	
	I1124 14:22:07.460008  213874 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-626991
	I1124 14:22:07.504295  213874 main.go:143] libmachine: Using SSH client type: native
	I1124 14:22:07.504623  213874 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1124 14:22:07.504649  213874 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-626991' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-626991/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-626991' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 14:22:07.691584  213874 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 14:22:07.691613  213874 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21932-2805/.minikube CaCertPath:/home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21932-2805/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21932-2805/.minikube}
	I1124 14:22:07.691648  213874 ubuntu.go:190] setting up certificates
	I1124 14:22:07.691657  213874 provision.go:84] configureAuth start
	I1124 14:22:07.691719  213874 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-626991
	I1124 14:22:07.725227  213874 provision.go:143] copyHostCerts
	I1124 14:22:07.725296  213874 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-2805/.minikube/ca.pem, removing ...
	I1124 14:22:07.725312  213874 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-2805/.minikube/ca.pem
	I1124 14:22:07.725398  213874 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21932-2805/.minikube/ca.pem (1078 bytes)
	I1124 14:22:07.725498  213874 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-2805/.minikube/cert.pem, removing ...
	I1124 14:22:07.725512  213874 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-2805/.minikube/cert.pem
	I1124 14:22:07.725544  213874 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-2805/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21932-2805/.minikube/cert.pem (1123 bytes)
	I1124 14:22:07.725612  213874 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-2805/.minikube/key.pem, removing ...
	I1124 14:22:07.725622  213874 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-2805/.minikube/key.pem
	I1124 14:22:07.725650  213874 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-2805/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21932-2805/.minikube/key.pem (1675 bytes)
	I1124 14:22:07.725710  213874 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21932-2805/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca-key.pem org=jenkins.auto-626991 san=[127.0.0.1 192.168.85.2 auto-626991 localhost minikube]
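configureAuth generates a server certificate whose SANs cover every name the machine can be reached by (the san=[...] list above). A hedged crypto/x509 sketch using those SANs; it self-signs for brevity, whereas the real cert is signed by ca.pem/ca-key.pem:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.auto-626991"}},
			// SAN values from the provision.go:117 log line above.
			DNSNames:    []string{"auto-626991", "localhost", "minikube"},
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
			NotBefore:   time.Now(),
			NotAfter:    time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config
			KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		// Self-signed here; the real server.pem is signed by the minikube CA.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}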
	I1124 14:22:08.211278  213874 provision.go:177] copyRemoteCerts
	I1124 14:22:08.211432  213874 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 14:22:08.211504  213874 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-626991
	I1124 14:22:08.228427  213874 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/auto-626991/id_rsa Username:docker}
	I1124 14:22:08.341632  213874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1124 14:22:08.368710  213874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1124 14:22:08.389020  213874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1124 14:22:08.408365  213874 provision.go:87] duration metric: took 716.684234ms to configureAuth
	I1124 14:22:08.408388  213874 ubuntu.go:206] setting minikube options for container-runtime
	I1124 14:22:08.408578  213874 config.go:182] Loaded profile config "auto-626991": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 14:22:08.408680  213874 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-626991
	I1124 14:22:08.445227  213874 main.go:143] libmachine: Using SSH client type: native
	I1124 14:22:08.445538  213874 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1124 14:22:08.445553  213874 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1124 14:22:08.871041  213874 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1124 14:22:08.871063  213874 machine.go:97] duration metric: took 4.851975904s to provisionDockerMachine
	I1124 14:22:08.871074  213874 client.go:176] duration metric: took 11.538178873s to LocalClient.Create
	I1124 14:22:08.871085  213874 start.go:167] duration metric: took 11.538235784s to libmachine.API.Create "auto-626991"
	I1124 14:22:08.871092  213874 start.go:293] postStartSetup for "auto-626991" (driver="docker")
	I1124 14:22:08.871101  213874 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 14:22:08.871167  213874 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 14:22:08.871213  213874 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-626991
	I1124 14:22:08.904432  213874 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/auto-626991/id_rsa Username:docker}
	I1124 14:22:09.024804  213874 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 14:22:09.028761  213874 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 14:22:09.028791  213874 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 14:22:09.028803  213874 filesync.go:126] Scanning /home/jenkins/minikube-integration/21932-2805/.minikube/addons for local assets ...
	I1124 14:22:09.028858  213874 filesync.go:126] Scanning /home/jenkins/minikube-integration/21932-2805/.minikube/files for local assets ...
	I1124 14:22:09.028946  213874 filesync.go:149] local asset: /home/jenkins/minikube-integration/21932-2805/.minikube/files/etc/ssl/certs/46112.pem -> 46112.pem in /etc/ssl/certs
	I1124 14:22:09.029054  213874 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 14:22:09.045156  213874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/files/etc/ssl/certs/46112.pem --> /etc/ssl/certs/46112.pem (1708 bytes)
	I1124 14:22:09.083003  213874 start.go:296] duration metric: took 211.897182ms for postStartSetup
	I1124 14:22:09.083494  213874 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-626991
	I1124 14:22:09.108914  213874 profile.go:143] Saving config to /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/auto-626991/config.json ...
	I1124 14:22:09.109200  213874 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 14:22:09.109252  213874 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-626991
	I1124 14:22:09.132900  213874 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/auto-626991/id_rsa Username:docker}
	I1124 14:22:09.253451  213874 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 14:22:09.259225  213874 start.go:128] duration metric: took 11.930118763s to createHost
	I1124 14:22:09.259251  213874 start.go:83] releasing machines lock for "auto-626991", held for 11.930257119s
	I1124 14:22:09.259332  213874 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-626991
	I1124 14:22:09.286515  213874 ssh_runner.go:195] Run: cat /version.json
	I1124 14:22:09.286566  213874 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-626991
	I1124 14:22:09.286804  213874 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 14:22:09.286883  213874 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-626991
	I1124 14:22:09.317344  213874 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/auto-626991/id_rsa Username:docker}
	I1124 14:22:09.323555  213874 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/auto-626991/id_rsa Username:docker}
	I1124 14:22:09.443055  213874 ssh_runner.go:195] Run: systemctl --version
	I1124 14:22:09.546776  213874 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1124 14:22:09.613883  213874 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 14:22:09.621460  213874 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 14:22:09.621529  213874 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 14:22:09.670237  213874 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1124 14:22:09.670256  213874 start.go:496] detecting cgroup driver to use...
	I1124 14:22:09.670287  213874 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1124 14:22:09.670346  213874 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1124 14:22:09.702072  213874 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1124 14:22:09.718703  213874 docker.go:218] disabling cri-docker service (if available) ...
	I1124 14:22:09.718818  213874 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 14:22:09.745999  213874 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 14:22:09.774558  213874 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 14:22:09.984371  213874 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 14:22:10.200153  213874 docker.go:234] disabling docker service ...
	I1124 14:22:10.200264  213874 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 14:22:10.236891  213874 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 14:22:10.258029  213874 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 14:22:10.424478  213874 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 14:22:10.611458  213874 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 14:22:10.637245  213874 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 14:22:10.670715  213874 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1124 14:22:10.670831  213874 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 14:22:10.683752  213874 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1124 14:22:10.683874  213874 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 14:22:10.694831  213874 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 14:22:10.704498  213874 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 14:22:10.718447  213874 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 14:22:10.731217  213874 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 14:22:10.741600  213874 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 14:22:10.755147  213874 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
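The sed pipeline above pins the pause image and cgroup manager in /etc/crio/crio.conf.d/02-crio.conf. The same two edits expressed in Go (run as root; crio still needs the daemon-reload and restart that follow in the log):

	package main

	import (
		"os"
		"regexp"
	)

	func main() {
		path := "/etc/crio/crio.conf.d/02-crio.conf"
		data, err := os.ReadFile(path)
		if err != nil {
			panic(err)
		}
		// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "..."|'
		data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
		data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
		if err := os.WriteFile(path, data, 0o644); err != nil {
			panic(err)
		}
	}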
	I1124 14:22:10.769996  213874 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 14:22:10.780550  213874 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 14:22:10.788750  213874 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 14:22:10.997921  213874 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1124 14:22:11.223991  213874 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1124 14:22:11.224111  213874 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1124 14:22:11.227951  213874 start.go:564] Will wait 60s for crictl version
	I1124 14:22:11.228071  213874 ssh_runner.go:195] Run: which crictl
	I1124 14:22:11.235101  213874 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 14:22:11.282436  213874 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1124 14:22:11.282580  213874 ssh_runner.go:195] Run: crio --version
	I1124 14:22:11.331860  213874 ssh_runner.go:195] Run: crio --version
	I1124 14:22:11.373566  213874 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1124 14:22:11.375056  213874 cli_runner.go:164] Run: docker network inspect auto-626991 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
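The --format template above turns docker network inspect output into a single JSON object. A sketch that runs the same command and decodes it; note the range loop leaves a trailing comma inside ContainerIPs on populated networks, which strict JSON parsing requires stripping first:

	package main

	import (
		"bytes"
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// Field names match the keys the --format template emits.
	type dockerNet struct {
		Name, Driver, Subnet, Gateway string
		MTU                           int
		ContainerIPs                  []string
	}

	func main() {
		format := `{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}`
		out, err := exec.Command("docker", "network", "inspect", "auto-626991", "--format", format).Output()
		if err != nil {
			panic(err)
		}
		// Strip the trailing comma the template leaves before the closing bracket.
		out = bytes.ReplaceAll(out, []byte(`",]`), []byte(`"]`))
		var n dockerNet
		if err := json.Unmarshal(bytes.TrimSpace(out), &n); err != nil {
			panic(err)
		}
		fmt.Printf("%s subnet=%s gateway=%s\n", n.Name, n.Subnet, n.Gateway)
	}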
	I1124 14:22:11.398636  213874 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1124 14:22:11.402709  213874 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 14:22:11.413475  213874 kubeadm.go:884] updating cluster {Name:auto-626991 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-626991 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 14:22:11.413604  213874 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 14:22:11.413660  213874 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 14:22:11.463495  213874 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 14:22:11.463517  213874 crio.go:433] Images already preloaded, skipping extraction
	I1124 14:22:11.463572  213874 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 14:22:11.504077  213874 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 14:22:11.504096  213874 cache_images.go:86] Images are preloaded, skipping loading
	I1124 14:22:11.504103  213874 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1124 14:22:11.504189  213874 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=auto-626991 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:auto-626991 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1124 14:22:11.504262  213874 ssh_runner.go:195] Run: crio config
	I1124 14:22:11.621271  213874 cni.go:84] Creating CNI manager for ""
	I1124 14:22:11.621419  213874 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 14:22:11.621453  213874 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 14:22:11.621504  213874 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-626991 NodeName:auto-626991 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 14:22:11.621667  213874 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-626991"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1124 14:22:11.621773  213874 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1124 14:22:11.633665  213874 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 14:22:11.633792  213874 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 14:22:11.643525  213874 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (361 bytes)
	I1124 14:22:11.672517  213874 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 14:22:11.698704  213874 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2208 bytes)
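The rendered kubeadm config above is staged as /var/tmp/minikube/kubeadm.yaml.new before init runs. As an out-of-band sanity check, recent kubeadm releases ship a config validate subcommand; a hedged sketch invoking it through the binary path the log uses (availability of the subcommand in this exact build is an assumption):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("/var/lib/minikube/binaries/v1.34.1/kubeadm",
			"config", "validate", "--config", "/var/tmp/minikube/kubeadm.yaml.new")
		out, err := cmd.CombinedOutput()
		fmt.Print(string(out))
		if err != nil {
			fmt.Println("validation failed:", err)
		}
	}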
	I1124 14:22:11.712462  213874 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1124 14:22:11.716863  213874 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 14:22:11.727305  213874 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 14:22:11.944624  213874 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 14:22:11.981934  213874 certs.go:69] Setting up /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/auto-626991 for IP: 192.168.85.2
	I1124 14:22:11.982012  213874 certs.go:195] generating shared ca certs ...
	I1124 14:22:11.982051  213874 certs.go:227] acquiring lock for ca certs: {Name:mk5b88bcf3bee8e73291a2c9c79f99bafa2afa7b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:22:11.982285  213874 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21932-2805/.minikube/ca.key
	I1124 14:22:11.982385  213874 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21932-2805/.minikube/proxy-client-ca.key
	I1124 14:22:11.982432  213874 certs.go:257] generating profile certs ...
	I1124 14:22:11.982541  213874 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/auto-626991/client.key
	I1124 14:22:11.982598  213874 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/auto-626991/client.crt with IP's: []
	I1124 14:22:12.375789  213874 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/auto-626991/client.crt ...
	I1124 14:22:12.375874  213874 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/auto-626991/client.crt: {Name:mk2b6b425e1346bd7b8911f945f39b4335ed2ecb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:22:12.376109  213874 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/auto-626991/client.key ...
	I1124 14:22:12.376144  213874 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/auto-626991/client.key: {Name:mk9ef474e0734f3338e26c65a52f2e13d7ce4704 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:22:12.376288  213874 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/auto-626991/apiserver.key.1f66f160
	I1124 14:22:12.376328  213874 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/auto-626991/apiserver.crt.1f66f160 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1124 14:22:12.489632  213874 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/auto-626991/apiserver.crt.1f66f160 ...
	I1124 14:22:12.489715  213874 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/auto-626991/apiserver.crt.1f66f160: {Name:mkdc5fc0585688e9c8ae9ddca28ca169dcf9d013 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:22:12.489922  213874 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/auto-626991/apiserver.key.1f66f160 ...
	I1124 14:22:12.489968  213874 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/auto-626991/apiserver.key.1f66f160: {Name:mk614bf7ba6042e262731e77a7b6aba451f3fad4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:22:12.490099  213874 certs.go:382] copying /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/auto-626991/apiserver.crt.1f66f160 -> /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/auto-626991/apiserver.crt
	I1124 14:22:12.490226  213874 certs.go:386] copying /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/auto-626991/apiserver.key.1f66f160 -> /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/auto-626991/apiserver.key
	I1124 14:22:12.490324  213874 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/auto-626991/proxy-client.key
	I1124 14:22:12.490364  213874 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/auto-626991/proxy-client.crt with IP's: []
	I1124 14:22:12.930229  213874 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/auto-626991/proxy-client.crt ...
	I1124 14:22:12.930256  213874 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/auto-626991/proxy-client.crt: {Name:mkd7ff303b0b57ba65b0c2c43834e68800ab93f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:22:12.930426  213874 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/auto-626991/proxy-client.key ...
	I1124 14:22:12.930433  213874 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/auto-626991/proxy-client.key: {Name:mk154aa7a9ce009650b7dc1e9d5aae783827da18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:22:12.930605  213874 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2805/.minikube/certs/4611.pem (1338 bytes)
	W1124 14:22:12.930641  213874 certs.go:480] ignoring /home/jenkins/minikube-integration/21932-2805/.minikube/certs/4611_empty.pem, impossibly tiny 0 bytes
	I1124 14:22:12.930649  213874 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca-key.pem (1679 bytes)
	I1124 14:22:12.930688  213874 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2805/.minikube/certs/ca.pem (1078 bytes)
	I1124 14:22:12.930713  213874 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2805/.minikube/certs/cert.pem (1123 bytes)
	I1124 14:22:12.930739  213874 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2805/.minikube/certs/key.pem (1675 bytes)
	I1124 14:22:12.930784  213874 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2805/.minikube/files/etc/ssl/certs/46112.pem (1708 bytes)
	I1124 14:22:12.931344  213874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 14:22:12.977400  213874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1124 14:22:13.105341  213874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 14:22:13.134805  213874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1124 14:22:13.156738  213874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/auto-626991/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1124 14:22:13.178510  213874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/auto-626991/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1124 14:22:13.199576  213874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/auto-626991/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 14:22:13.228465  213874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/auto-626991/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1124 14:22:13.264130  213874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/certs/4611.pem --> /usr/share/ca-certificates/4611.pem (1338 bytes)
	I1124 14:22:13.296732  213874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/files/etc/ssl/certs/46112.pem --> /usr/share/ca-certificates/46112.pem (1708 bytes)
	I1124 14:22:13.326437  213874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2805/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 14:22:13.355785  213874 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 14:22:13.374892  213874 ssh_runner.go:195] Run: openssl version
	I1124 14:22:13.386626  213874 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 14:22:13.400498  213874 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 14:22:13.411559  213874 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 13:14 /usr/share/ca-certificates/minikubeCA.pem
	I1124 14:22:13.411725  213874 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 14:22:13.494822  213874 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 14:22:13.506215  213874 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4611.pem && ln -fs /usr/share/ca-certificates/4611.pem /etc/ssl/certs/4611.pem"
	I1124 14:22:13.520108  213874 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4611.pem
	I1124 14:22:13.524962  213874 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 13:21 /usr/share/ca-certificates/4611.pem
	I1124 14:22:13.525025  213874 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4611.pem
	I1124 14:22:13.577238  213874 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4611.pem /etc/ssl/certs/51391683.0"
	I1124 14:22:13.593736  213874 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/46112.pem && ln -fs /usr/share/ca-certificates/46112.pem /etc/ssl/certs/46112.pem"
	I1124 14:22:13.603002  213874 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/46112.pem
	I1124 14:22:13.609518  213874 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 13:21 /usr/share/ca-certificates/46112.pem
	I1124 14:22:13.609584  213874 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/46112.pem
	I1124 14:22:13.670900  213874 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/46112.pem /etc/ssl/certs/3ec20f2e.0"
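certs.go:528 hashes each CA with openssl x509 -hash and links it into /etc/ssl/certs as <hash>.0 (b5213941.0, 51391683.0, 3ec20f2e.0 above), which is how OpenSSL-based clients locate trusted CAs. A sketch of that hash-and-symlink step:

	package main

	import (
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	func main() {
		pemPath := "/usr/share/ca-certificates/minikubeCA.pem"
		// openssl prints the subject hash, e.g. "b5213941" as in the log.
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			panic(err)
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		_ = os.Remove(link) // replace any stale link
		if err := os.Symlink(pemPath, link); err != nil {
			panic(err)
		}
	}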
	I1124 14:22:13.682074  213874 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 14:22:13.687746  213874 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1124 14:22:13.687891  213874 kubeadm.go:401] StartCluster: {Name:auto-626991 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-626991 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 14:22:13.687988  213874 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 14:22:13.688073  213874 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 14:22:13.733288  213874 cri.go:89] found id: ""
	I1124 14:22:13.733407  213874 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 14:22:13.754603  213874 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1124 14:22:13.772085  213874 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1124 14:22:13.772208  213874 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1124 14:22:13.790351  213874 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1124 14:22:13.790418  213874 kubeadm.go:158] found existing configuration files:
	
	I1124 14:22:13.790494  213874 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1124 14:22:13.807100  213874 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1124 14:22:13.807212  213874 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1124 14:22:13.823285  213874 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1124 14:22:13.844006  213874 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1124 14:22:13.844149  213874 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1124 14:22:13.866170  213874 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1124 14:22:13.878244  213874 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1124 14:22:13.878362  213874 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1124 14:22:13.890208  213874 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1124 14:22:13.899774  213874 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1124 14:22:13.899887  213874 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
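The config check above greps each kubeconfig under /etc/kubernetes for the pinned control-plane endpoint and removes any file that lacks it, so kubeadm regenerates them; on this first start all four are simply absent. The same cleanup loop sketched locally (the real code runs the grep over SSH):

	package main

	import (
		"os"
		"path/filepath"
		"strings"
	)

	func main() {
		const endpoint = "https://control-plane.minikube.internal:8443"
		for _, f := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
			p := filepath.Join("/etc/kubernetes", f)
			data, err := os.ReadFile(p)
			// Missing or stale (wrong endpoint) configs are removed so
			// kubeadm init writes fresh ones.
			if err != nil || !strings.Contains(string(data), endpoint) {
				os.Remove(p)
			}
		}
	}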
	I1124 14:22:13.911670  213874 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1124 14:22:13.975089  213874 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1124 14:22:13.975333  213874 kubeadm.go:319] [preflight] Running pre-flight checks
	I1124 14:22:14.020093  213874 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1124 14:22:14.020257  213874 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1124 14:22:14.020335  213874 kubeadm.go:319] OS: Linux
	I1124 14:22:14.020407  213874 kubeadm.go:319] CGROUPS_CPU: enabled
	I1124 14:22:14.020495  213874 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1124 14:22:14.020601  213874 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1124 14:22:14.020684  213874 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1124 14:22:14.020764  213874 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1124 14:22:14.020849  213874 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1124 14:22:14.020922  213874 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1124 14:22:14.021004  213874 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1124 14:22:14.021081  213874 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1124 14:22:14.152076  213874 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1124 14:22:14.152245  213874 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1124 14:22:14.152369  213874 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1124 14:22:14.164269  213874 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1124 14:22:14.728464  212938 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.907041262s)
	I1124 14:22:14.728523  212938 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (8.839836794s)
	I1124 14:22:14.728543  212938 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-152851" to be "Ready" ...
	I1124 14:22:14.728849  212938 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.805868009s)
	I1124 14:22:14.781587  212938 node_ready.go:49] node "default-k8s-diff-port-152851" is "Ready"
	I1124 14:22:14.781669  212938 node_ready.go:38] duration metric: took 53.105039ms for node "default-k8s-diff-port-152851" to be "Ready" ...
	I1124 14:22:14.781699  212938 api_server.go:52] waiting for apiserver process to appear ...
	I1124 14:22:14.781787  212938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 14:22:15.045672  212938 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (8.742937231s)
	I1124 14:22:15.045892  212938 api_server.go:72] duration metric: took 9.535272742s to wait for apiserver process to appear ...
	I1124 14:22:15.045947  212938 api_server.go:88] waiting for apiserver healthz status ...
	I1124 14:22:15.045984  212938 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1124 14:22:15.048613  212938 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-152851 addons enable metrics-server
	
	I1124 14:22:15.051644  212938 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1124 14:22:14.169516  213874 out.go:252]   - Generating certificates and keys ...
	I1124 14:22:14.169676  213874 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1124 14:22:14.169795  213874 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1124 14:22:14.585454  213874 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1124 14:22:15.355462  213874 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1124 14:22:16.102126  213874 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1124 14:22:16.826876  213874 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1124 14:22:17.040300  213874 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1124 14:22:17.041012  213874 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [auto-626991 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1124 14:22:15.054545  212938 addons.go:530] duration metric: took 9.545086561s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1124 14:22:15.059246  212938 api_server.go:279] https://192.168.76.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1124 14:22:15.059275  212938 api_server.go:103] status: https://192.168.76.2:8444/healthz returned error 500:
	[verbose healthz body elided; identical to the 500 response above]
	I1124 14:22:15.547073  212938 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1124 14:22:15.565525  212938 api_server.go:279] https://192.168.76.2:8444/healthz returned 200:
	ok
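
The 500-then-200 sequence above is the apiserver's composite healthz settling once the rbac/bootstrap-roles post-start hook finishes. The same probe can be reproduced by hand against the address in the log (a sketch; -k skips TLS verification for brevity):

    # Composite health; add ?verbose for the per-check breakdown shown in the log.
    curl -k https://192.168.76.2:8444/healthz
    curl -k 'https://192.168.76.2:8444/healthz?verbose'
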
	I1124 14:22:15.566665  212938 api_server.go:141] control plane version: v1.34.1
	I1124 14:22:15.566721  212938 api_server.go:131] duration metric: took 520.749055ms to wait for apiserver health ...
	I1124 14:22:15.566747  212938 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 14:22:15.575018  212938 system_pods.go:59] 8 kube-system pods found
	I1124 14:22:15.575098  212938 system_pods.go:61] "coredns-66bc5c9577-qnfqn" [386494d3-c6d0-46da-898f-5936bcc3bb40] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 14:22:15.575143  212938 system_pods.go:61] "etcd-default-k8s-diff-port-152851" [73849492-289b-4e8a-b132-076ac817ec77] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 14:22:15.575174  212938 system_pods.go:61] "kindnet-4j292" [b23f3231-3c24-4e8a-bb05-74e475601643] Running
	I1124 14:22:15.575205  212938 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-152851" [43967435-4b2a-4555-879f-03c39fe3874a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1124 14:22:15.575234  212938 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-152851" [088e57fd-b148-4545-93c8-e115d7ce1c9e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 14:22:15.575258  212938 system_pods.go:61] "kube-proxy-m92jb" [118788fe-af1a-46f0-8ff3-7c4a381d36fd] Running
	I1124 14:22:15.575286  212938 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-152851" [59ad535f-afc5-418d-af06-b88121856fc9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 14:22:15.575318  212938 system_pods.go:61] "storage-provisioner" [21b060b9-5567-4a41-8e79-351855fb6f30] Running
	I1124 14:22:15.575339  212938 system_pods.go:74] duration metric: took 8.568163ms to wait for pod list to return data ...
	I1124 14:22:15.575380  212938 default_sa.go:34] waiting for default service account to be created ...
	I1124 14:22:15.582403  212938 default_sa.go:45] found service account: "default"
	I1124 14:22:15.582471  212938 default_sa.go:55] duration metric: took 7.070103ms for default service account to be created ...
	I1124 14:22:15.582503  212938 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 14:22:15.592894  212938 system_pods.go:86] 8 kube-system pods found
	I1124 14:22:15.592986  212938 system_pods.go:89] "coredns-66bc5c9577-qnfqn" [386494d3-c6d0-46da-898f-5936bcc3bb40] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 14:22:15.593011  212938 system_pods.go:89] "etcd-default-k8s-diff-port-152851" [73849492-289b-4e8a-b132-076ac817ec77] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 14:22:15.593042  212938 system_pods.go:89] "kindnet-4j292" [b23f3231-3c24-4e8a-bb05-74e475601643] Running
	I1124 14:22:15.593069  212938 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-152851" [43967435-4b2a-4555-879f-03c39fe3874a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1124 14:22:15.593097  212938 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-152851" [088e57fd-b148-4545-93c8-e115d7ce1c9e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 14:22:15.593128  212938 system_pods.go:89] "kube-proxy-m92jb" [118788fe-af1a-46f0-8ff3-7c4a381d36fd] Running
	I1124 14:22:15.593157  212938 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-152851" [59ad535f-afc5-418d-af06-b88121856fc9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 14:22:15.593176  212938 system_pods.go:89] "storage-provisioner" [21b060b9-5567-4a41-8e79-351855fb6f30] Running
	I1124 14:22:15.593206  212938 system_pods.go:126] duration metric: took 10.680693ms to wait for k8s-apps to be running ...
	I1124 14:22:15.593235  212938 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 14:22:15.593309  212938 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 14:22:15.633479  212938 system_svc.go:56] duration metric: took 40.234785ms WaitForService to wait for kubelet
	I1124 14:22:15.633546  212938 kubeadm.go:587] duration metric: took 10.122926814s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 14:22:15.633581  212938 node_conditions.go:102] verifying NodePressure condition ...
	I1124 14:22:15.640437  212938 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1124 14:22:15.640513  212938 node_conditions.go:123] node cpu capacity is 2
	I1124 14:22:15.640544  212938 node_conditions.go:105] duration metric: took 6.939599ms to run NodePressure ...
	I1124 14:22:15.640580  212938 start.go:242] waiting for startup goroutines ...
	I1124 14:22:15.640618  212938 start.go:247] waiting for cluster config update ...
	I1124 14:22:15.640645  212938 start.go:256] writing updated cluster config ...
	I1124 14:22:15.640957  212938 ssh_runner.go:195] Run: rm -f paused
	I1124 14:22:15.644922  212938 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 14:22:15.651193  212938 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-qnfqn" in "kube-system" namespace to be "Ready" or be gone ...
	W1124 14:22:17.738542  212938 pod_ready.go:104] pod "coredns-66bc5c9577-qnfqn" is not "Ready", error: <nil>
	I1124 14:22:17.348726  213874 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1124 14:22:17.349080  213874 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [auto-626991 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1124 14:22:17.470061  213874 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1124 14:22:18.492660  213874 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1124 14:22:19.386916  213874 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1124 14:22:19.386994  213874 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1124 14:22:19.713376  213874 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1124 14:22:20.336514  213874 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1124 14:22:20.871769  213874 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1124 14:22:22.200250  213874 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1124 14:22:23.197598  213874 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1124 14:22:23.197695  213874 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1124 14:22:23.206641  213874 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W1124 14:22:20.161246  212938 pod_ready.go:104] pod "coredns-66bc5c9577-qnfqn" is not "Ready", error: <nil>
	W1124 14:22:22.674838  212938 pod_ready.go:104] pod "coredns-66bc5c9577-qnfqn" is not "Ready", error: <nil>
	I1124 14:22:23.211933  213874 out.go:252]   - Booting up control plane ...
	I1124 14:22:23.212053  213874 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1124 14:22:23.212137  213874 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1124 14:22:23.212217  213874 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1124 14:22:23.229585  213874 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1124 14:22:23.229693  213874 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1124 14:22:23.239109  213874 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1124 14:22:23.239969  213874 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1124 14:22:23.240261  213874 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1124 14:22:23.418534  213874 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1124 14:22:23.418657  213874 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1124 14:22:25.944899  213874 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 2.526475711s
	I1124 14:22:25.950776  213874 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1124 14:22:25.950872  213874 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1124 14:22:25.951187  213874 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1124 14:22:25.951281  213874 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	W1124 14:22:25.160237  212938 pod_ready.go:104] pod "coredns-66bc5c9577-qnfqn" is not "Ready", error: <nil>
	W1124 14:22:27.656850  212938 pod_ready.go:104] pod "coredns-66bc5c9577-qnfqn" is not "Ready", error: <nil>
	W1124 14:22:29.659674  212938 pod_ready.go:104] pod "coredns-66bc5c9577-qnfqn" is not "Ready", error: <nil>
	W1124 14:22:32.157745  212938 pod_ready.go:104] pod "coredns-66bc5c9577-qnfqn" is not "Ready", error: <nil>
	W1124 14:22:34.159815  212938 pod_ready.go:104] pod "coredns-66bc5c9577-qnfqn" is not "Ready", error: <nil>
	I1124 14:22:32.159474  213874 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 6.206603012s
	I1124 14:22:32.953522  213874 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 7.002663639s
	I1124 14:22:34.699347  213874 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 8.748536308s
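
The control-plane-check endpoints kubeadm polls above are plain HTTP(S) health URLs and can be probed directly on the node (ports copied from the log; a sketch):

    curl -s  http://127.0.0.1:10248/healthz   # kubelet
    curl -sk https://127.0.0.1:10257/healthz  # kube-controller-manager
    curl -sk https://127.0.0.1:10259/livez    # kube-scheduler
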
	I1124 14:22:34.758204  213874 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1124 14:22:34.779011  213874 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1124 14:22:34.794805  213874 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1124 14:22:34.795009  213874 kubeadm.go:319] [mark-control-plane] Marking the node auto-626991 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1124 14:22:34.808076  213874 kubeadm.go:319] [bootstrap-token] Using token: bh15hk.2agovimrivvussod
	I1124 14:22:34.811303  213874 out.go:252]   - Configuring RBAC rules ...
	I1124 14:22:34.811455  213874 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1124 14:22:34.820379  213874 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1124 14:22:34.830071  213874 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1124 14:22:34.836887  213874 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1124 14:22:34.842064  213874 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1124 14:22:34.847216  213874 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1124 14:22:35.109802  213874 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1124 14:22:35.540842  213874 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1124 14:22:36.106833  213874 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1124 14:22:36.108481  213874 kubeadm.go:319] 
	I1124 14:22:36.108562  213874 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1124 14:22:36.108572  213874 kubeadm.go:319] 
	I1124 14:22:36.108703  213874 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1124 14:22:36.108714  213874 kubeadm.go:319] 
	I1124 14:22:36.108739  213874 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1124 14:22:36.108803  213874 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1124 14:22:36.108864  213874 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1124 14:22:36.108876  213874 kubeadm.go:319] 
	I1124 14:22:36.108937  213874 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1124 14:22:36.108946  213874 kubeadm.go:319] 
	I1124 14:22:36.108994  213874 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1124 14:22:36.109003  213874 kubeadm.go:319] 
	I1124 14:22:36.109064  213874 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1124 14:22:36.109152  213874 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1124 14:22:36.109225  213874 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1124 14:22:36.109234  213874 kubeadm.go:319] 
	I1124 14:22:36.109326  213874 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1124 14:22:36.109411  213874 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1124 14:22:36.109421  213874 kubeadm.go:319] 
	I1124 14:22:36.109505  213874 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token bh15hk.2agovimrivvussod \
	I1124 14:22:36.109615  213874 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:37f0f49cec723293ddb4e564b6685275917c85627d2c55051ccb0f083d16274f \
	I1124 14:22:36.109641  213874 kubeadm.go:319] 	--control-plane 
	I1124 14:22:36.109651  213874 kubeadm.go:319] 
	I1124 14:22:36.109736  213874 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1124 14:22:36.109745  213874 kubeadm.go:319] 
	I1124 14:22:36.109827  213874 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token bh15hk.2agovimrivvussod \
	I1124 14:22:36.109933  213874 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:37f0f49cec723293ddb4e564b6685275917c85627d2c55051ccb0f083d16274f 
	I1124 14:22:36.114992  213874 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1124 14:22:36.115210  213874 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1124 14:22:36.115317  213874 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
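
The --discovery-token-ca-cert-hash in the join commands above is the SHA-256 of the cluster CA's public key. It can be recomputed on the control plane with the standard openssl pipeline (a sketch; assumes kubeadm's default CA path and an RSA CA key):

    openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'
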
	I1124 14:22:36.115339  213874 cni.go:84] Creating CNI manager for ""
	I1124 14:22:36.115347  213874 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 14:22:36.118674  213874 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1124 14:22:36.121678  213874 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1124 14:22:36.126014  213874 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1124 14:22:36.126037  213874 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1124 14:22:36.145054  213874 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1124 14:22:36.544867  213874 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1124 14:22:36.545029  213874 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-626991 minikube.k8s.io/updated_at=2025_11_24T14_22_36_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=b5d1c9f4e75f4e638a533695fd62619949cefcab minikube.k8s.io/name=auto-626991 minikube.k8s.io/primary=true
	I1124 14:22:36.545034  213874 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:22:36.760135  213874 ops.go:34] apiserver oom_adj: -16
	I1124 14:22:36.760264  213874 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1124 14:22:36.657134  212938 pod_ready.go:104] pod "coredns-66bc5c9577-qnfqn" is not "Ready", error: <nil>
	W1124 14:22:38.657803  212938 pod_ready.go:104] pod "coredns-66bc5c9577-qnfqn" is not "Ready", error: <nil>
	I1124 14:22:37.260584  213874 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:22:37.761145  213874 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:22:38.261048  213874 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:22:38.760766  213874 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:22:39.261209  213874 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:22:39.396092  213874 kubeadm.go:1114] duration metric: took 2.851139361s to wait for elevateKubeSystemPrivileges
	I1124 14:22:39.396124  213874 kubeadm.go:403] duration metric: took 25.708234996s to StartCluster
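
The repeated "kubectl get sa default" runs above appear to be minikube polling until the default ServiceAccount exists before creating the minikube-rbac binding that grants kube-system:default cluster-admin. A hedged way to confirm the result afterwards:

    kubectl get sa default
    kubectl get clusterrolebinding minikube-rbac -o wide
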
	I1124 14:22:39.396140  213874 settings.go:142] acquiring lock: {Name:mk89c1ba43c874315f683e1eb3a8f5ff3817a931 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:22:39.396199  213874 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21932-2805/kubeconfig
	I1124 14:22:39.397136  213874 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2805/kubeconfig: {Name:mk95d10d27091d631e85a5a3c35d5e4e38630871 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:22:39.397370  213874 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 14:22:39.397488  213874 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1124 14:22:39.397753  213874 config.go:182] Loaded profile config "auto-626991": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 14:22:39.397731  213874 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 14:22:39.397815  213874 addons.go:70] Setting storage-provisioner=true in profile "auto-626991"
	I1124 14:22:39.397852  213874 addons.go:239] Setting addon storage-provisioner=true in "auto-626991"
	I1124 14:22:39.397878  213874 host.go:66] Checking if "auto-626991" exists ...
	I1124 14:22:39.398349  213874 cli_runner.go:164] Run: docker container inspect auto-626991 --format={{.State.Status}}
	I1124 14:22:39.398601  213874 addons.go:70] Setting default-storageclass=true in profile "auto-626991"
	I1124 14:22:39.398617  213874 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "auto-626991"
	I1124 14:22:39.398870  213874 cli_runner.go:164] Run: docker container inspect auto-626991 --format={{.State.Status}}
	I1124 14:22:39.402327  213874 out.go:179] * Verifying Kubernetes components...
	I1124 14:22:39.406749  213874 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 14:22:39.438069  213874 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 14:22:39.439647  213874 addons.go:239] Setting addon default-storageclass=true in "auto-626991"
	I1124 14:22:39.439762  213874 host.go:66] Checking if "auto-626991" exists ...
	I1124 14:22:39.442381  213874 cli_runner.go:164] Run: docker container inspect auto-626991 --format={{.State.Status}}
	I1124 14:22:39.444890  213874 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 14:22:39.444911  213874 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 14:22:39.444967  213874 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-626991
	I1124 14:22:39.480938  213874 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 14:22:39.480958  213874 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 14:22:39.481020  213874 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-626991
	I1124 14:22:39.495223  213874 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/auto-626991/id_rsa Username:docker}
	I1124 14:22:39.516541  213874 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/auto-626991/id_rsa Username:docker}
	I1124 14:22:39.776866  213874 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
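
The sed pipeline above splices a hosts block into the CoreDNS Corefile so pods can resolve host.minikube.internal to the gateway IP. A sketch for inspecting the result (expected fragment shown as comments, reconstructed from the sed expressions rather than dumped from the live cluster):

    kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
    #    hosts {
    #       192.168.85.1 host.minikube.internal
    #       fallthrough
    #    }
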
	I1124 14:22:39.815014  213874 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 14:22:39.957921  213874 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 14:22:40.068815  213874 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 14:22:40.284494  213874 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1124 14:22:40.287137  213874 node_ready.go:35] waiting up to 15m0s for node "auto-626991" to be "Ready" ...
	I1124 14:22:40.794767  213874 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-626991" context rescaled to 1 replicas
	I1124 14:22:40.813690  213874 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1124 14:22:40.816425  213874 addons.go:530] duration metric: took 1.418688646s for enable addons: enabled=[storage-provisioner default-storageclass]
	W1124 14:22:41.157341  212938 pod_ready.go:104] pod "coredns-66bc5c9577-qnfqn" is not "Ready", error: <nil>
	W1124 14:22:43.657763  212938 pod_ready.go:104] pod "coredns-66bc5c9577-qnfqn" is not "Ready", error: <nil>
	W1124 14:22:42.291942  213874 node_ready.go:57] node "auto-626991" has "Ready":"False" status (will retry)
	W1124 14:22:44.790660  213874 node_ready.go:57] node "auto-626991" has "Ready":"False" status (will retry)
	W1124 14:22:45.661339  212938 pod_ready.go:104] pod "coredns-66bc5c9577-qnfqn" is not "Ready", error: <nil>
	W1124 14:22:48.157220  212938 pod_ready.go:104] pod "coredns-66bc5c9577-qnfqn" is not "Ready", error: <nil>
	W1124 14:22:47.290190  213874 node_ready.go:57] node "auto-626991" has "Ready":"False" status (will retry)
	W1124 14:22:49.790633  213874 node_ready.go:57] node "auto-626991" has "Ready":"False" status (will retry)
	W1124 14:22:50.157728  212938 pod_ready.go:104] pod "coredns-66bc5c9577-qnfqn" is not "Ready", error: <nil>
	I1124 14:22:51.657145  212938 pod_ready.go:94] pod "coredns-66bc5c9577-qnfqn" is "Ready"
	I1124 14:22:51.657171  212938 pod_ready.go:86] duration metric: took 36.005907128s for pod "coredns-66bc5c9577-qnfqn" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:22:51.660102  212938 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-152851" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:22:51.664862  212938 pod_ready.go:94] pod "etcd-default-k8s-diff-port-152851" is "Ready"
	I1124 14:22:51.664885  212938 pod_ready.go:86] duration metric: took 4.754715ms for pod "etcd-default-k8s-diff-port-152851" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:22:51.667201  212938 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-152851" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:22:51.671884  212938 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-152851" is "Ready"
	I1124 14:22:51.671911  212938 pod_ready.go:86] duration metric: took 4.686613ms for pod "kube-apiserver-default-k8s-diff-port-152851" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:22:51.674136  212938 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-152851" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:22:51.855581  212938 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-152851" is "Ready"
	I1124 14:22:51.855609  212938 pod_ready.go:86] duration metric: took 181.448117ms for pod "kube-controller-manager-default-k8s-diff-port-152851" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:22:52.056180  212938 pod_ready.go:83] waiting for pod "kube-proxy-m92jb" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:22:52.454880  212938 pod_ready.go:94] pod "kube-proxy-m92jb" is "Ready"
	I1124 14:22:52.454906  212938 pod_ready.go:86] duration metric: took 398.698546ms for pod "kube-proxy-m92jb" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:22:52.655170  212938 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-152851" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:22:53.055216  212938 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-152851" is "Ready"
	I1124 14:22:53.055242  212938 pod_ready.go:86] duration metric: took 400.041969ms for pod "kube-scheduler-default-k8s-diff-port-152851" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:22:53.055255  212938 pod_ready.go:40] duration metric: took 37.410262957s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 14:22:53.108086  212938 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1124 14:22:53.111472  212938 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-152851" cluster and "default" namespace by default
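
The extra 4m0s wait that just completed is minikube's own readiness gate over the labelled kube-system pods. An equivalent manual check with kubectl (labels copied from the log; a sketch covering two of the six labels):

    kubectl -n kube-system wait --for=condition=Ready pod \
      -l k8s-app=kube-dns --timeout=4m
    kubectl -n kube-system wait --for=condition=Ready pod \
      -l component=kube-apiserver --timeout=4m
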
	W1124 14:22:52.290915  213874 node_ready.go:57] node "auto-626991" has "Ready":"False" status (will retry)
	W1124 14:22:54.789948  213874 node_ready.go:57] node "auto-626991" has "Ready":"False" status (will retry)
	W1124 14:22:57.290275  213874 node_ready.go:57] node "auto-626991" has "Ready":"False" status (will retry)
	W1124 14:22:59.292428  213874 node_ready.go:57] node "auto-626991" has "Ready":"False" status (will retry)
	W1124 14:23:01.790687  213874 node_ready.go:57] node "auto-626991" has "Ready":"False" status (will retry)
	W1124 14:23:04.293609  213874 node_ready.go:57] node "auto-626991" has "Ready":"False" status (will retry)
	W1124 14:23:06.295954  213874 node_ready.go:57] node "auto-626991" has "Ready":"False" status (will retry)
	
	
	==> CRI-O <==
	Nov 24 14:22:45 default-k8s-diff-port-152851 crio[653]: time="2025-11-24T14:22:45.192081757Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 14:22:45 default-k8s-diff-port-152851 crio[653]: time="2025-11-24T14:22:45.201732087Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 14:22:45 default-k8s-diff-port-152851 crio[653]: time="2025-11-24T14:22:45.202072062Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/0fe67197ed092f1e4ddf54ac5e4276588c2664aa6d51812725534928b3e90c99/merged/etc/group: no such file or directory"
	Nov 24 14:22:45 default-k8s-diff-port-152851 crio[653]: time="2025-11-24T14:22:45.20320855Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 14:22:45 default-k8s-diff-port-152851 crio[653]: time="2025-11-24T14:22:45.232759187Z" level=info msg="Created container c22e573f34aa0e2dc77ca244518275a75fe9d0fc6dea3e2b5672083318574366: kubernetes-dashboard/kubernetes-dashboard-855c9754f9-n9rgw/kubernetes-dashboard" id=f22b27d7-7def-45b7-b9f1-84632e79abdf name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 14:22:45 default-k8s-diff-port-152851 crio[653]: time="2025-11-24T14:22:45.234063208Z" level=info msg="Starting container: c22e573f34aa0e2dc77ca244518275a75fe9d0fc6dea3e2b5672083318574366" id=030f6d7b-1640-4a4a-9bc7-ada492c7c978 name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 14:22:45 default-k8s-diff-port-152851 crio[653]: time="2025-11-24T14:22:45.239815735Z" level=info msg="Started container" PID=1644 containerID=c22e573f34aa0e2dc77ca244518275a75fe9d0fc6dea3e2b5672083318574366 description=kubernetes-dashboard/kubernetes-dashboard-855c9754f9-n9rgw/kubernetes-dashboard id=030f6d7b-1640-4a4a-9bc7-ada492c7c978 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e7266de4109737a5005cfa351de750c6e8a6129708fb0519946c624f028985a9
	Nov 24 14:22:46 default-k8s-diff-port-152851 crio[653]: time="2025-11-24T14:22:46.035564679Z" level=info msg="Removing container: 3c0fe8ae572d0945f9bb967ad15c8c3823c09a8cc32eac4644fd3265f8329de9" id=5bcd1981-f443-423a-91dc-7c25c0992ea3 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 24 14:22:46 default-k8s-diff-port-152851 crio[653]: time="2025-11-24T14:22:46.043747802Z" level=info msg="Error loading conmon cgroup of container 3c0fe8ae572d0945f9bb967ad15c8c3823c09a8cc32eac4644fd3265f8329de9: cgroup deleted" id=5bcd1981-f443-423a-91dc-7c25c0992ea3 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 24 14:22:46 default-k8s-diff-port-152851 crio[653]: time="2025-11-24T14:22:46.047288171Z" level=info msg="Removed container 3c0fe8ae572d0945f9bb967ad15c8c3823c09a8cc32eac4644fd3265f8329de9: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-bx8xc/dashboard-metrics-scraper" id=5bcd1981-f443-423a-91dc-7c25c0992ea3 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 24 14:22:53 default-k8s-diff-port-152851 crio[653]: time="2025-11-24T14:22:53.684639934Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 24 14:22:53 default-k8s-diff-port-152851 crio[653]: time="2025-11-24T14:22:53.688449761Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 24 14:22:53 default-k8s-diff-port-152851 crio[653]: time="2025-11-24T14:22:53.688482853Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 24 14:22:53 default-k8s-diff-port-152851 crio[653]: time="2025-11-24T14:22:53.688502529Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 24 14:22:53 default-k8s-diff-port-152851 crio[653]: time="2025-11-24T14:22:53.695842387Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 24 14:22:53 default-k8s-diff-port-152851 crio[653]: time="2025-11-24T14:22:53.695875478Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 24 14:22:53 default-k8s-diff-port-152851 crio[653]: time="2025-11-24T14:22:53.695898461Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 24 14:22:53 default-k8s-diff-port-152851 crio[653]: time="2025-11-24T14:22:53.698913573Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 24 14:22:53 default-k8s-diff-port-152851 crio[653]: time="2025-11-24T14:22:53.699073427Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 24 14:22:53 default-k8s-diff-port-152851 crio[653]: time="2025-11-24T14:22:53.699119639Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 24 14:22:53 default-k8s-diff-port-152851 crio[653]: time="2025-11-24T14:22:53.702355381Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 24 14:22:53 default-k8s-diff-port-152851 crio[653]: time="2025-11-24T14:22:53.70238734Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 24 14:22:53 default-k8s-diff-port-152851 crio[653]: time="2025-11-24T14:22:53.702413597Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 24 14:22:53 default-k8s-diff-port-152851 crio[653]: time="2025-11-24T14:22:53.705506559Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 24 14:22:53 default-k8s-diff-port-152851 crio[653]: time="2025-11-24T14:22:53.705538609Z" level=info msg="Updated default CNI network name to kindnet"
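
The CREATE/WRITE/RENAME sequence above is kindnet replacing its CNI config atomically: it writes 10-kindnet.conflist.temp and renames it into place, and CRI-O's config watcher re-reads the default network on each event. To see what CRI-O ended up with (a sketch; run inside the node):

    sudo ls -l /etc/cni/net.d/
    sudo cat /etc/cni/net.d/10-kindnet.conflist
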
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	c22e573f34aa0       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   25 seconds ago       Running             kubernetes-dashboard        0                   e7266de410973       kubernetes-dashboard-855c9754f9-n9rgw                  kubernetes-dashboard
	8c79dacfe2ce2       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           25 seconds ago       Running             storage-provisioner         2                   d2278d8171798       storage-provisioner                                    kube-system
	562bb15e55c2c       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           25 seconds ago       Exited              dashboard-metrics-scraper   2                   9070a8bad5a5d       dashboard-metrics-scraper-6ffb444bf9-bx8xc             kubernetes-dashboard
	c8989c13e14d0       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           57 seconds ago       Running             busybox                     1                   5867e364048f7       busybox                                                default
	1c698135a2638       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           57 seconds ago       Running             kube-proxy                  1                   f289ff1c5cb02       kube-proxy-m92jb                                       kube-system
	a1ed8677ffbfc       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           57 seconds ago       Running             coredns                     1                   86bc9657ac448       coredns-66bc5c9577-qnfqn                               kube-system
	39188bea6a564       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           57 seconds ago       Running             kindnet-cni                 1                   56072b31069d6       kindnet-4j292                                          kube-system
	2989f7e752e95       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           57 seconds ago       Exited              storage-provisioner         1                   d2278d8171798       storage-provisioner                                    kube-system
	0c3cb362e4d91       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   b44afd490cb7e       kube-apiserver-default-k8s-diff-port-152851            kube-system
	79537370e6485       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   0c5530df43674       etcd-default-k8s-diff-port-152851                      kube-system
	1ab258e8f64fc       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   9705521b00ad0       kube-scheduler-default-k8s-diff-port-152851            kube-system
	6f961bf321242       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   583d62b5165cb       kube-controller-manager-default-k8s-diff-port-152851   kube-system
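
This table is effectively the CRI view of the node's containers and can be regenerated on the node (a sketch):

    sudo crictl ps -a
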
	
	
	==> coredns [a1ed8677ffbfcde28812b6af270ab182d73971a25480fded9fd523b9027de0fb] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:52857 - 64591 "HINFO IN 6377043057621824404.6857678930301169623. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.022612221s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
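
The dial tcp 10.96.0.1:443 i/o timeouts indicate this CoreDNS replica started before the kubernetes Service VIP was reachable (kube-proxy was still coming up); the "Still waiting on" lines clear once the listers sync. A hedged host-side check of what CoreDNS was dialing:

    kubectl get svc kubernetes -o wide
    kubectl get endpoints kubernetes
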
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-152851
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-152851
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b5d1c9f4e75f4e638a533695fd62619949cefcab
	                    minikube.k8s.io/name=default-k8s-diff-port-152851
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T14_20_38_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 14:20:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-152851
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 14:23:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 14:22:42 +0000   Mon, 24 Nov 2025 14:20:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 14:22:42 +0000   Mon, 24 Nov 2025 14:20:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 14:22:42 +0000   Mon, 24 Nov 2025 14:20:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Nov 2025 14:22:42 +0000   Mon, 24 Nov 2025 14:21:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-diff-port-152851
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 7283ea1857f18f20a875c29069214c9d
	  System UUID:                854b5bec-4224-4750-be80-397681d0c7d0
	  Boot ID:                    1b5f797b-5607-4a65-8de2-379783b7e272
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         102s
	  kube-system                 coredns-66bc5c9577-qnfqn                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m27s
	  kube-system                 etcd-default-k8s-diff-port-152851                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m32s
	  kube-system                 kindnet-4j292                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m27s
	  kube-system                 kube-apiserver-default-k8s-diff-port-152851             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m32s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-152851    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m33s
	  kube-system                 kube-proxy-m92jb                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m27s
	  kube-system                 kube-scheduler-default-k8s-diff-port-152851             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m32s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m25s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-bx8xc              0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-n9rgw                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m26s                  kube-proxy       
	  Normal   Starting                 55s                    kube-proxy       
	  Normal   Starting                 2m40s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m40s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m40s (x8 over 2m40s)  kubelet          Node default-k8s-diff-port-152851 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m40s (x8 over 2m40s)  kubelet          Node default-k8s-diff-port-152851 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m40s (x8 over 2m40s)  kubelet          Node default-k8s-diff-port-152851 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m33s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m33s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    2m32s                  kubelet          Node default-k8s-diff-port-152851 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m32s                  kubelet          Node default-k8s-diff-port-152851 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  2m32s                  kubelet          Node default-k8s-diff-port-152851 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           2m28s                  node-controller  Node default-k8s-diff-port-152851 event: Registered Node default-k8s-diff-port-152851 in Controller
	  Normal   NodeReady                106s                   kubelet          Node default-k8s-diff-port-152851 status is now: NodeReady
	  Normal   Starting                 66s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 66s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  66s (x8 over 66s)      kubelet          Node default-k8s-diff-port-152851 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    66s (x8 over 66s)      kubelet          Node default-k8s-diff-port-152851 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     66s (x8 over 66s)      kubelet          Node default-k8s-diff-port-152851 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           53s                    node-controller  Node default-k8s-diff-port-152851 event: Registered Node default-k8s-diff-port-152851 in Controller
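
This section can be regenerated at any time (node name from the report):

    kubectl describe node default-k8s-diff-port-152851
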
	
	
	==> dmesg <==
	[ +47.364934] overlayfs: idmapped layers are currently not supported
	[Nov24 13:59] overlayfs: idmapped layers are currently not supported
	[Nov24 14:00] overlayfs: idmapped layers are currently not supported
	[ +26.972375] overlayfs: idmapped layers are currently not supported
	[Nov24 14:02] overlayfs: idmapped layers are currently not supported
	[Nov24 14:03] overlayfs: idmapped layers are currently not supported
	[Nov24 14:05] overlayfs: idmapped layers are currently not supported
	[Nov24 14:07] overlayfs: idmapped layers are currently not supported
	[ +22.741489] overlayfs: idmapped layers are currently not supported
	[Nov24 14:11] overlayfs: idmapped layers are currently not supported
	[Nov24 14:13] overlayfs: idmapped layers are currently not supported
	[ +29.661409] overlayfs: idmapped layers are currently not supported
	[ +14.398898] overlayfs: idmapped layers are currently not supported
	[Nov24 14:14] overlayfs: idmapped layers are currently not supported
	[ +36.148198] overlayfs: idmapped layers are currently not supported
	[Nov24 14:16] overlayfs: idmapped layers are currently not supported
	[Nov24 14:17] overlayfs: idmapped layers are currently not supported
	[Nov24 14:18] overlayfs: idmapped layers are currently not supported
	[ +49.916713] overlayfs: idmapped layers are currently not supported
	[Nov24 14:19] overlayfs: idmapped layers are currently not supported
	[Nov24 14:20] overlayfs: idmapped layers are currently not supported
	[Nov24 14:21] overlayfs: idmapped layers are currently not supported
	[ +26.692408] overlayfs: idmapped layers are currently not supported
	[Nov24 14:22] overlayfs: idmapped layers are currently not supported
	[ +21.257761] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [79537370e6485bd82564920e391bc4bdfa906e6f8c7d96a71aac5f90ea93fca2] <==
	{"level":"warn","ts":"2025-11-24T14:22:09.240845Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49710","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:22:09.270420Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49728","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:22:09.308977Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49750","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:22:09.346935Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49782","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:22:09.391641Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49794","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:22:09.421443Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49806","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:22:09.465728Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49826","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:22:09.491719Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49856","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:22:09.532801Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49866","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:22:09.541901Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49884","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:22:09.592849Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49894","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:22:09.630641Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49910","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:22:09.646491Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49924","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:22:09.678143Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49940","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:22:09.729972Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49954","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:22:09.849414Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49968","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:22:09.898432Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49984","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:22:09.973491Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60074","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:22:10.015642Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60090","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:22:10.172085Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60104","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:22:13.117790Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"111.494448ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-apiserver-default-k8s-diff-port-152851\" limit:1 ","response":"range_response_count:1 size:7752"}
	{"level":"info","ts":"2025-11-24T14:22:13.117847Z","caller":"traceutil/trace.go:172","msg":"trace[1151595630] range","detail":"{range_begin:/registry/pods/kube-system/kube-apiserver-default-k8s-diff-port-152851; range_end:; response_count:1; response_revision:523; }","duration":"111.56206ms","start":"2025-11-24T14:22:13.006273Z","end":"2025-11-24T14:22:13.117835Z","steps":["trace[1151595630] 'agreement among raft nodes before linearized reading'  (duration: 92.729636ms)","trace[1151595630] 'range keys from in-memory index tree'  (duration: 18.68139ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-24T14:22:13.118047Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"111.726828ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient\" limit:1 ","response":"range_response_count:1 size:718"}
	{"level":"info","ts":"2025-11-24T14:22:13.118066Z","caller":"traceutil/trace.go:172","msg":"trace[1048545038] range","detail":"{range_begin:/registry/clusterroles/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient; range_end:; response_count:1; response_revision:524; }","duration":"111.748826ms","start":"2025-11-24T14:22:13.006312Z","end":"2025-11-24T14:22:13.118061Z","steps":["trace[1048545038] 'agreement among raft nodes before linearized reading'  (duration: 111.682364ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T14:22:13.118163Z","caller":"traceutil/trace.go:172","msg":"trace[1234685541] transaction","detail":"{read_only:false; response_revision:524; number_of_response:1; }","duration":"129.814916ms","start":"2025-11-24T14:22:12.988340Z","end":"2025-11-24T14:22:13.118155Z","steps":["trace[1234685541] 'process raft request'  (duration: 110.724799ms)","trace[1234685541] 'compare'  (duration: 18.367804ms)"],"step_count":2}
	
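Note on the slow-request warnings above: etcd flags any request that exceeds its 100ms expected duration and attaches a step-by-step trace ("agreement among raft nodes", "range keys from in-memory index tree"). Below is a minimal Go sketch of the same kind of limited, linearizable range read, using the official go.etcd.io/etcd/client/v3 package; the endpoint address and key prefix are illustrative assumptions, not values taken from this run.

// slowread.go: issue a limit:1 range read like the one in the trace above
// and report how long it took. Endpoint and key prefix are assumptions.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	clientv3 "go.etcd.io/etcd/client/v3"
)

func main() {
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"127.0.0.1:2379"}, // assumed local etcd endpoint
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	defer cancel()

	start := time.Now()
	// limit:1 mirrors the "read-only range ... limit:1" request in the trace.
	resp, err := cli.Get(ctx, "/registry/pods/kube-system/",
		clientv3.WithPrefix(), clientv3.WithLimit(1))
	if err != nil {
		log.Fatal(err)
	}
	// etcd itself logs "apply request took too long" once this exceeds 100ms.
	fmt.Printf("range took %v, kvs=%d\n", time.Since(start), len(resp.Kvs))
}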
	
	==> kernel <==
	 14:23:11 up  2:05,  0 user,  load average: 4.34, 3.47, 2.78
	Linux default-k8s-diff-port-152851 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [39188bea6a564382a6980903b479f266ffccf33b661be945f494d30c4d35a2a1] <==
	I1124 14:22:13.492845       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1124 14:22:13.493069       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1124 14:22:13.493190       1 main.go:148] setting mtu 1500 for CNI 
	I1124 14:22:13.493202       1 main.go:178] kindnetd IP family: "ipv4"
	I1124 14:22:13.493212       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-24T14:22:13Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1124 14:22:13.709192       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1124 14:22:13.709220       1 controller.go:381] "Waiting for informer caches to sync"
	I1124 14:22:13.709230       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1124 14:22:13.709533       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1124 14:22:43.684531       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1124 14:22:43.709793       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1124 14:22:43.709890       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1124 14:22:43.709991       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1124 14:22:45.113176       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1124 14:22:45.113215       1 metrics.go:72] Registering metrics
	I1124 14:22:45.113274       1 controller.go:711] "Syncing nftables rules"
	I1124 14:22:53.684123       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1124 14:22:53.684343       1 main.go:301] handling current node
	I1124 14:23:03.692846       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1124 14:23:03.692884       1 main.go:301] handling current node
	
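Note: the reflector timeouts followed by "Caches are synced" above are the standard client-go list/watch cycle recovering; the initial List simply retries until the apiserver VIP (10.96.0.1:443) is reachable again. A minimal sketch of that informer pattern, assuming in-cluster config as a pod like kindnetd would use:

// informer.go: start a shared informer and wait for its cache to sync,
// the mechanism behind the "Waiting for caches to sync" lines above.
package main

import (
	"context"
	"log"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/cache"
)

func main() {
	// In-cluster config dials the same service VIP the reflector logs show.
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	factory := informers.NewSharedInformerFactory(client, 30*time.Second)
	nodes := factory.Core().V1().Nodes().Informer()

	ctx, cancel := context.WithCancel(context.Background())
	defer cancel()

	factory.Start(ctx.Done())
	// Blocks until the initial List succeeds; an "i/o timeout" like the ones
	// logged above delays the sync, it does not abort it.
	if !cache.WaitForCacheSync(ctx.Done(), nodes.HasSynced) {
		log.Fatal("caches never synced")
	}
	log.Println("caches are synced")
}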
	
	==> kube-apiserver [0c3cb362e4d9189052f992f04fe50fac0c17ff2bd5f72ef4be40e433331ba291] <==
	I1124 14:22:11.751191       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1124 14:22:11.751227       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1124 14:22:11.765960       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1124 14:22:11.766034       1 aggregator.go:171] initial CRD sync complete...
	I1124 14:22:11.766042       1 autoregister_controller.go:144] Starting autoregister controller
	I1124 14:22:11.766049       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1124 14:22:11.766055       1 cache.go:39] Caches are synced for autoregister controller
	I1124 14:22:11.773809       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 14:22:11.773853       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1124 14:22:11.774808       1 cache.go:39] Caches are synced for LocalAvailability controller
	E1124 14:22:11.796848       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1124 14:22:11.844550       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1124 14:22:11.844601       1 policy_source.go:240] refreshing policies
	I1124 14:22:11.875506       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1124 14:22:12.383199       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1124 14:22:12.677958       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1124 14:22:14.036508       1 controller.go:667] quota admission added evaluator for: namespaces
	I1124 14:22:14.545811       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1124 14:22:14.676397       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1124 14:22:14.773519       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1124 14:22:14.992561       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.103.140.40"}
	I1124 14:22:15.038841       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.108.218.156"}
	I1124 14:22:17.151588       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1124 14:22:17.451117       1 controller.go:667] quota admission added evaluator for: endpoints
	I1124 14:22:17.503656       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [6f961bf32124218ddecf96482422e8b5741e2a3a4f6241f531f98636f588acab] <==
	I1124 14:22:17.004155       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1124 14:22:17.004239       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1124 14:22:17.004312       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1124 14:22:17.004369       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1124 14:22:17.004398       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1124 14:22:17.009147       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1124 14:22:17.012542       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1124 14:22:17.015104       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1124 14:22:17.015312       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1124 14:22:17.015534       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-152851"
	I1124 14:22:17.015658       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1124 14:22:17.019894       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1124 14:22:17.022434       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1124 14:22:17.024102       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 14:22:17.028942       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 14:22:17.030377       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 14:22:17.030473       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1124 14:22:17.030507       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1124 14:22:17.036929       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1124 14:22:17.042573       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1124 14:22:17.045806       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1124 14:22:17.045924       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1124 14:22:17.045938       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1124 14:22:17.048942       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1124 14:22:17.055162       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	
	
	==> kube-proxy [1c698135a263826ddf532798c50e5a822a0a3c1879d5551637c0335e965e578a] <==
	I1124 14:22:15.254032       1 server_linux.go:53] "Using iptables proxy"
	I1124 14:22:15.333143       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1124 14:22:15.444386       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1124 14:22:15.444501       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1124 14:22:15.444627       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 14:22:15.470653       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 14:22:15.470769       1 server_linux.go:132] "Using iptables Proxier"
	I1124 14:22:15.474710       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 14:22:15.475109       1 server.go:527] "Version info" version="v1.34.1"
	I1124 14:22:15.475288       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 14:22:15.476690       1 config.go:200] "Starting service config controller"
	I1124 14:22:15.476751       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 14:22:15.476811       1 config.go:106] "Starting endpoint slice config controller"
	I1124 14:22:15.476838       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 14:22:15.476874       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 14:22:15.476936       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1124 14:22:15.477633       1 config.go:309] "Starting node config controller"
	I1124 14:22:15.480146       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 14:22:15.480210       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1124 14:22:15.579455       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1124 14:22:15.600798       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1124 14:22:15.600830       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
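Note on the nodePortAddresses warning above: with the field unset, kube-proxy accepts NodePort traffic on every local IP, while a CIDR list restricts which addresses qualify. A small illustrative sketch of that filtering decision (the subnet values are assumptions, and this is not kube-proxy's actual code):

// nodeport.go: illustrate the effect of leaving nodePortAddresses unset
// versus restricting it to an assumed node subnet.
package main

import (
	"fmt"
	"net/netip"
)

func acceptsNodePort(ip netip.Addr, nodePortAddresses []netip.Prefix) bool {
	if len(nodePortAddresses) == 0 {
		return true // unset: accepted on all local IPs, as the warning says
	}
	for _, cidr := range nodePortAddresses {
		if cidr.Contains(ip) {
			return true
		}
	}
	return false
}

func main() {
	primary := []netip.Prefix{netip.MustParsePrefix("192.168.76.0/24")} // assumed node subnet
	for _, s := range []string{"192.168.76.2", "127.0.0.1"} {
		ip := netip.MustParseAddr(s)
		fmt.Printf("%-14s unset=%v filtered=%v\n",
			s, acceptsNodePort(ip, nil), acceptsNodePort(ip, primary))
	}
}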
	
	==> kube-scheduler [1ab258e8f64fcb7b1fb7769531303c2b402470d9e0ae16ffe932a98857f4fd05] <==
	I1124 14:22:10.587447       1 serving.go:386] Generated self-signed cert in-memory
	I1124 14:22:15.166185       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1124 14:22:15.166295       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 14:22:15.172176       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1124 14:22:15.172368       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1124 14:22:15.172428       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1124 14:22:15.172479       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1124 14:22:15.180805       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 14:22:15.187442       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 14:22:15.181030       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1124 14:22:15.187605       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1124 14:22:15.272602       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1124 14:22:15.288049       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1124 14:22:15.288124       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 24 14:22:17 default-k8s-diff-port-152851 kubelet[782]: I1124 14:22:17.662603     782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/81506cf9-bce8-4955-8685-686c2fe938fb-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-n9rgw\" (UID: \"81506cf9-bce8-4955-8685-686c2fe938fb\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-n9rgw"
	Nov 24 14:22:17 default-k8s-diff-port-152851 kubelet[782]: W1124 14:22:17.981018     782 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/bb380e4fa749c80f5c1b19c95fcad1ed1f835691b8483c3ebe22f8e66973175a/crio-9070a8bad5a5d83c701317ea5de703851d0e05c2181f492428b4007d48740164 WatchSource:0}: Error finding container 9070a8bad5a5d83c701317ea5de703851d0e05c2181f492428b4007d48740164: Status 404 returned error can't find the container with id 9070a8bad5a5d83c701317ea5de703851d0e05c2181f492428b4007d48740164
	Nov 24 14:22:18 default-k8s-diff-port-152851 kubelet[782]: W1124 14:22:18.001880     782 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/bb380e4fa749c80f5c1b19c95fcad1ed1f835691b8483c3ebe22f8e66973175a/crio-e7266de4109737a5005cfa351de750c6e8a6129708fb0519946c624f028985a9 WatchSource:0}: Error finding container e7266de4109737a5005cfa351de750c6e8a6129708fb0519946c624f028985a9: Status 404 returned error can't find the container with id e7266de4109737a5005cfa351de750c6e8a6129708fb0519946c624f028985a9
	Nov 24 14:22:21 default-k8s-diff-port-152851 kubelet[782]: I1124 14:22:21.292265     782 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 24 14:22:24 default-k8s-diff-port-152851 kubelet[782]: I1124 14:22:24.933985     782 scope.go:117] "RemoveContainer" containerID="946383ed71ec7606ee61d3904da95ae5f85dfda4dd465a11f11d07701cdc6ebe"
	Nov 24 14:22:25 default-k8s-diff-port-152851 kubelet[782]: I1124 14:22:25.938940     782 scope.go:117] "RemoveContainer" containerID="946383ed71ec7606ee61d3904da95ae5f85dfda4dd465a11f11d07701cdc6ebe"
	Nov 24 14:22:25 default-k8s-diff-port-152851 kubelet[782]: I1124 14:22:25.940081     782 scope.go:117] "RemoveContainer" containerID="3c0fe8ae572d0945f9bb967ad15c8c3823c09a8cc32eac4644fd3265f8329de9"
	Nov 24 14:22:25 default-k8s-diff-port-152851 kubelet[782]: E1124 14:22:25.940251     782 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-bx8xc_kubernetes-dashboard(e8676d02-dd48-4dc0-b25e-bd9c480084bc)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-bx8xc" podUID="e8676d02-dd48-4dc0-b25e-bd9c480084bc"
	Nov 24 14:22:26 default-k8s-diff-port-152851 kubelet[782]: I1124 14:22:26.944308     782 scope.go:117] "RemoveContainer" containerID="3c0fe8ae572d0945f9bb967ad15c8c3823c09a8cc32eac4644fd3265f8329de9"
	Nov 24 14:22:26 default-k8s-diff-port-152851 kubelet[782]: E1124 14:22:26.945045     782 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-bx8xc_kubernetes-dashboard(e8676d02-dd48-4dc0-b25e-bd9c480084bc)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-bx8xc" podUID="e8676d02-dd48-4dc0-b25e-bd9c480084bc"
	Nov 24 14:22:27 default-k8s-diff-port-152851 kubelet[782]: I1124 14:22:27.946524     782 scope.go:117] "RemoveContainer" containerID="3c0fe8ae572d0945f9bb967ad15c8c3823c09a8cc32eac4644fd3265f8329de9"
	Nov 24 14:22:27 default-k8s-diff-port-152851 kubelet[782]: E1124 14:22:27.947139     782 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-bx8xc_kubernetes-dashboard(e8676d02-dd48-4dc0-b25e-bd9c480084bc)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-bx8xc" podUID="e8676d02-dd48-4dc0-b25e-bd9c480084bc"
	Nov 24 14:22:43 default-k8s-diff-port-152851 kubelet[782]: I1124 14:22:43.692294     782 scope.go:117] "RemoveContainer" containerID="3c0fe8ae572d0945f9bb967ad15c8c3823c09a8cc32eac4644fd3265f8329de9"
	Nov 24 14:22:45 default-k8s-diff-port-152851 kubelet[782]: I1124 14:22:45.023220     782 scope.go:117] "RemoveContainer" containerID="2989f7e752e954727b85260ced94cba345baeb3ca207485296c9174eb09dfd54"
	Nov 24 14:22:46 default-k8s-diff-port-152851 kubelet[782]: I1124 14:22:46.030718     782 scope.go:117] "RemoveContainer" containerID="3c0fe8ae572d0945f9bb967ad15c8c3823c09a8cc32eac4644fd3265f8329de9"
	Nov 24 14:22:46 default-k8s-diff-port-152851 kubelet[782]: I1124 14:22:46.031002     782 scope.go:117] "RemoveContainer" containerID="562bb15e55c2c7efd782aafdd77f42b761f21c9e5ed723b3a3473ff620847201"
	Nov 24 14:22:46 default-k8s-diff-port-152851 kubelet[782]: E1124 14:22:46.031175     782 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-bx8xc_kubernetes-dashboard(e8676d02-dd48-4dc0-b25e-bd9c480084bc)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-bx8xc" podUID="e8676d02-dd48-4dc0-b25e-bd9c480084bc"
	Nov 24 14:22:46 default-k8s-diff-port-152851 kubelet[782]: I1124 14:22:46.101547     782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-n9rgw" podStartSLOduration=1.926383425 podStartE2EDuration="29.101527643s" podCreationTimestamp="2025-11-24 14:22:17 +0000 UTC" firstStartedPulling="2025-11-24 14:22:18.006271969 +0000 UTC m=+13.782792411" lastFinishedPulling="2025-11-24 14:22:45.181416179 +0000 UTC m=+40.957936629" observedRunningTime="2025-11-24 14:22:46.071738579 +0000 UTC m=+41.848259029" watchObservedRunningTime="2025-11-24 14:22:46.101527643 +0000 UTC m=+41.878048085"
	Nov 24 14:22:47 default-k8s-diff-port-152851 kubelet[782]: I1124 14:22:47.923113     782 scope.go:117] "RemoveContainer" containerID="562bb15e55c2c7efd782aafdd77f42b761f21c9e5ed723b3a3473ff620847201"
	Nov 24 14:22:47 default-k8s-diff-port-152851 kubelet[782]: E1124 14:22:47.923298     782 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-bx8xc_kubernetes-dashboard(e8676d02-dd48-4dc0-b25e-bd9c480084bc)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-bx8xc" podUID="e8676d02-dd48-4dc0-b25e-bd9c480084bc"
	Nov 24 14:22:59 default-k8s-diff-port-152851 kubelet[782]: I1124 14:22:59.691795     782 scope.go:117] "RemoveContainer" containerID="562bb15e55c2c7efd782aafdd77f42b761f21c9e5ed723b3a3473ff620847201"
	Nov 24 14:22:59 default-k8s-diff-port-152851 kubelet[782]: E1124 14:22:59.691987     782 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-bx8xc_kubernetes-dashboard(e8676d02-dd48-4dc0-b25e-bd9c480084bc)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-bx8xc" podUID="e8676d02-dd48-4dc0-b25e-bd9c480084bc"
	Nov 24 14:23:05 default-k8s-diff-port-152851 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 24 14:23:05 default-k8s-diff-port-152851 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 24 14:23:05 default-k8s-diff-port-152851 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
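Note: the "back-off 10s" then "back-off 20s" CrashLoopBackOff messages above show the kubelet's restart back-off doubling from 10 seconds up to its documented 5-minute cap. A minimal sketch of that schedule (illustrative only, not the kubelet's implementation):

// backoff.go: reproduce the kubelet-style doubling restart back-off
// visible in the CrashLoopBackOff messages above.
package main

import (
	"fmt"
	"time"
)

func restartBackoff(restarts int) time.Duration {
	const (
		initial = 10 * time.Second
		max     = 5 * time.Minute // kubelet's documented cap
	)
	d := initial
	for i := 0; i < restarts; i++ {
		d *= 2
		if d > max {
			return max
		}
	}
	return d
}

func main() {
	for r := 0; r < 7; r++ {
		fmt.Printf("restart %d -> back-off %v\n", r, restartBackoff(r))
	}
}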
	
	==> kubernetes-dashboard [c22e573f34aa0e2dc77ca244518275a75fe9d0fc6dea3e2b5672083318574366] <==
	2025/11/24 14:22:45 Using namespace: kubernetes-dashboard
	2025/11/24 14:22:45 Using in-cluster config to connect to apiserver
	2025/11/24 14:22:45 Using secret token for csrf signing
	2025/11/24 14:22:45 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/24 14:22:45 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/24 14:22:45 Successful initial request to the apiserver, version: v1.34.1
	2025/11/24 14:22:45 Generating JWE encryption key
	2025/11/24 14:22:45 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/24 14:22:45 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/24 14:22:45 Initializing JWE encryption key from synchronized object
	2025/11/24 14:22:45 Creating in-cluster Sidecar client
	2025/11/24 14:22:45 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/24 14:22:45 Serving insecurely on HTTP port: 9090
	2025/11/24 14:22:45 Starting overwatch
	
	
	==> storage-provisioner [2989f7e752e954727b85260ced94cba345baeb3ca207485296c9174eb09dfd54] <==
	I1124 14:22:14.046706       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1124 14:22:44.050528       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [8c79dacfe2ce221fbea9a2486ec69afabc6d46c3c4d82441e90690036efd9d52] <==
	I1124 14:22:45.179280       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1124 14:22:45.199177       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1124 14:22:45.199517       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1124 14:22:45.207342       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:22:48.662677       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:22:52.922949       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:22:56.521410       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:22:59.574728       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:23:02.597429       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:23:02.602975       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1124 14:23:02.603124       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1124 14:23:02.603888       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d5980bf8-00ae-4d19-87f0-18805e995386", APIVersion:"v1", ResourceVersion:"686", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-152851_b86a2b06-7d3b-4b45-b327-820f8093d619 became leader
	I1124 14:23:02.605064       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-152851_b86a2b06-7d3b-4b45-b327-820f8093d619!
	W1124 14:23:02.605181       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:23:02.618435       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1124 14:23:02.707811       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-152851_b86a2b06-7d3b-4b45-b327-820f8093d619!
	W1124 14:23:04.621846       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:23:04.627062       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:23:06.630787       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:23:06.635278       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:23:08.639861       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:23:08.646477       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:23:10.651531       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:23:10.659216       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
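Note: the repeated deprecation warnings above come from the provisioner taking its k8s.io-minikube-hostpath leader lock through v1 Endpoints objects. A hedged sketch of the Lease-based lock that client-go recommends instead; the identity string is an assumption (real code typically uses the pod name plus a unique suffix):

// leaderelect.go: acquire the same named lock via a coordination.k8s.io
// Lease, which avoids the v1 Endpoints deprecation warnings above.
package main

import (
	"context"
	"log"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	lock := &resourcelock.LeaseLock{
		LeaseMeta: metav1.ObjectMeta{
			Name:      "k8s.io-minikube-hostpath",
			Namespace: "kube-system",
		},
		Client: client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{
			Identity: "example-identity", // assumed; use pod name + UID in practice
		},
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second,
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) { log.Println("became leader") },
			OnStoppedLeading: func() { log.Println("lost lease") },
		},
	})
}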

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-152851 -n default-k8s-diff-port-152851
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-152851 -n default-k8s-diff-port-152851: exit status 2 (390.897027ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-152851 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (7.14s)
E1124 14:28:53.477688    4611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/no-preload-444317/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 14:29:00.800326    4611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/functional-471703/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 14:29:05.129951    4611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/auto-626991/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 14:29:12.548930    4611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/default-k8s-diff-port-152851/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
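For reference, a hedged client-go equivalent of the post-mortem query at helpers_test.go:269 above (kubectl get po -A --field-selector=status.phase!=Running); kubeconfig resolution is simplified to the default home location:

// notrunning.go: list every pod whose phase is not Running, across all
// namespaces, using the same field selector as the post-mortem check.
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	pods, err := client.CoreV1().Pods(metav1.NamespaceAll).List(context.Background(),
		metav1.ListOptions{FieldSelector: "status.phase!=Running"})
	if err != nil {
		log.Fatal(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s is %s\n", p.Namespace, p.Name, p.Status.Phase)
	}
}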


Test pass (261/328)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 9.55
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.11
9 TestDownloadOnly/v1.28.0/DeleteAll 0.26
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.21
12 TestDownloadOnly/v1.34.1/json-events 10.23
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.09
18 TestDownloadOnly/v1.34.1/DeleteAll 0.2
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.14
21 TestBinaryMirror 0.59
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.08
27 TestAddons/Setup 165.83
31 TestAddons/serial/GCPAuth/Namespaces 0.25
32 TestAddons/serial/GCPAuth/FakeCredentials 8.85
48 TestAddons/StoppedEnableDisable 12.59
49 TestCertOptions 38.52
50 TestCertExpiration 257.03
52 TestForceSystemdFlag 41.08
53 TestForceSystemdEnv 43.3
58 TestErrorSpam/setup 31.54
59 TestErrorSpam/start 0.78
60 TestErrorSpam/status 1.13
61 TestErrorSpam/pause 6.49
62 TestErrorSpam/unpause 5.54
63 TestErrorSpam/stop 1.54
66 TestFunctional/serial/CopySyncFile 0.01
67 TestFunctional/serial/StartWithProxy 80.24
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 29.53
70 TestFunctional/serial/KubeContext 0.06
71 TestFunctional/serial/KubectlGetPods 0.11
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.43
75 TestFunctional/serial/CacheCmd/cache/add_local 1.04
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
77 TestFunctional/serial/CacheCmd/cache/list 0.06
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.31
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.85
80 TestFunctional/serial/CacheCmd/cache/delete 0.11
81 TestFunctional/serial/MinikubeKubectlCmd 0.14
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.13
83 TestFunctional/serial/ExtraConfig 38.65
84 TestFunctional/serial/ComponentHealth 0.1
85 TestFunctional/serial/LogsCmd 1.47
86 TestFunctional/serial/LogsFileCmd 1.48
87 TestFunctional/serial/InvalidService 4.38
89 TestFunctional/parallel/ConfigCmd 0.48
90 TestFunctional/parallel/DashboardCmd 9.29
91 TestFunctional/parallel/DryRun 0.46
92 TestFunctional/parallel/InternationalLanguage 0.2
93 TestFunctional/parallel/StatusCmd 1.07
98 TestFunctional/parallel/AddonsCmd 0.18
99 TestFunctional/parallel/PersistentVolumeClaim 24.91
101 TestFunctional/parallel/SSHCmd 0.69
102 TestFunctional/parallel/CpCmd 1.98
104 TestFunctional/parallel/FileSync 0.29
105 TestFunctional/parallel/CertSync 1.72
109 TestFunctional/parallel/NodeLabels 0.08
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.95
113 TestFunctional/parallel/License 2.47
115 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.6
116 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
118 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.34
119 TestFunctional/parallel/Version/short 0.09
120 TestFunctional/parallel/Version/components 1.12
121 TestFunctional/parallel/ImageCommands/ImageListShort 0.27
122 TestFunctional/parallel/ImageCommands/ImageListTable 0.3
123 TestFunctional/parallel/ImageCommands/ImageListJson 0.27
124 TestFunctional/parallel/ImageCommands/ImageListYaml 0.33
125 TestFunctional/parallel/ImageCommands/ImageBuild 4.56
126 TestFunctional/parallel/ImageCommands/Setup 0.66
131 TestFunctional/parallel/ImageCommands/ImageRemove 0.51
134 TestFunctional/parallel/UpdateContextCmd/no_changes 0.25
135 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.16
136 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.2
137 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.08
138 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
142 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
143 TestFunctional/parallel/MountCmd/any-port 7.57
144 TestFunctional/parallel/MountCmd/specific-port 2.34
145 TestFunctional/parallel/MountCmd/VerifyCleanup 2.77
147 TestFunctional/parallel/ProfileCmd/profile_not_create 0.46
148 TestFunctional/parallel/ProfileCmd/profile_list 0.42
149 TestFunctional/parallel/ProfileCmd/profile_json_output 0.42
150 TestFunctional/parallel/ServiceCmd/List 1.33
151 TestFunctional/parallel/ServiceCmd/JSONOutput 1.37
155 TestFunctional/delete_echo-server_images 0.05
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
162 TestMultiControlPlane/serial/StartCluster 208.82
163 TestMultiControlPlane/serial/DeployApp 7.72
164 TestMultiControlPlane/serial/PingHostFromPods 1.45
165 TestMultiControlPlane/serial/AddWorkerNode 59.47
166 TestMultiControlPlane/serial/NodeLabels 0.1
167 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.06
168 TestMultiControlPlane/serial/CopyFile 20.05
169 TestMultiControlPlane/serial/StopSecondaryNode 13.4
170 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.85
171 TestMultiControlPlane/serial/RestartSecondaryNode 31.43
172 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.48
173 TestMultiControlPlane/serial/RestartClusterKeepsNodes 134.26
174 TestMultiControlPlane/serial/DeleteSecondaryNode 12.34
175 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.83
176 TestMultiControlPlane/serial/StopCluster 36.07
177 TestMultiControlPlane/serial/RestartCluster 67.5
178 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.85
179 TestMultiControlPlane/serial/AddSecondaryNode 93.58
180 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.11
185 TestJSONOutput/start/Command 81.7
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 5.85
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.24
210 TestKicCustomNetwork/create_custom_network 69.63
211 TestKicCustomNetwork/use_default_bridge_network 33.76
212 TestKicExistingNetwork 34.32
213 TestKicCustomSubnet 37.92
214 TestKicStaticIP 36.46
215 TestMainNoArgs 0.05
216 TestMinikubeProfile 72.44
219 TestMountStart/serial/StartWithMountFirst 8.67
220 TestMountStart/serial/VerifyMountFirst 0.27
221 TestMountStart/serial/StartWithMountSecond 8.91
222 TestMountStart/serial/VerifyMountSecond 0.28
223 TestMountStart/serial/DeleteFirst 1.71
224 TestMountStart/serial/VerifyMountPostDelete 0.27
225 TestMountStart/serial/Stop 1.3
226 TestMountStart/serial/RestartStopped 8
227 TestMountStart/serial/VerifyMountPostStop 0.27
230 TestMultiNode/serial/FreshStart2Nodes 135.14
231 TestMultiNode/serial/DeployApp2Nodes 4.93
232 TestMultiNode/serial/PingHostFrom2Pods 0.97
233 TestMultiNode/serial/AddNode 58.36
234 TestMultiNode/serial/MultiNodeLabels 0.1
235 TestMultiNode/serial/ProfileList 0.73
236 TestMultiNode/serial/CopyFile 10.56
237 TestMultiNode/serial/StopNode 2.46
238 TestMultiNode/serial/StartAfterStop 8.05
239 TestMultiNode/serial/RestartKeepsNodes 72.25
240 TestMultiNode/serial/DeleteNode 5.65
241 TestMultiNode/serial/StopMultiNode 23.96
242 TestMultiNode/serial/RestartMultiNode 50.23
243 TestMultiNode/serial/ValidateNameConflict 39.42
248 TestPreload 160.39
250 TestScheduledStopUnix 110.28
253 TestInsufficientStorage 13.47
254 TestRunningBinaryUpgrade 61.22
256 TestKubernetesUpgrade 349.18
257 TestMissingContainerUpgrade 105.41
259 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
260 TestNoKubernetes/serial/StartWithK8s 49.07
261 TestNoKubernetes/serial/StartWithStopK8s 113.51
262 TestNoKubernetes/serial/Start 7.74
263 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
264 TestNoKubernetes/serial/VerifyK8sNotRunning 0.28
265 TestNoKubernetes/serial/ProfileList 33.4
266 TestNoKubernetes/serial/Stop 1.29
267 TestNoKubernetes/serial/StartNoArgs 9.02
268 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.28
269 TestStoppedBinaryUpgrade/Setup 8.09
270 TestStoppedBinaryUpgrade/Upgrade 53.48
271 TestStoppedBinaryUpgrade/MinikubeLogs 1.2
280 TestPause/serial/Start 84.09
281 TestPause/serial/SecondStartNoReconfiguration 31.84
290 TestNetworkPlugins/group/false 4.93
295 TestStartStop/group/old-k8s-version/serial/FirstStart 61.01
296 TestStartStop/group/old-k8s-version/serial/DeployApp 10.42
298 TestStartStop/group/old-k8s-version/serial/Stop 12.14
299 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.2
300 TestStartStop/group/old-k8s-version/serial/SecondStart 47.88
301 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
302 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.13
303 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.24
306 TestStartStop/group/no-preload/serial/FirstStart 78.3
308 TestStartStop/group/embed-certs/serial/FirstStart 89.32
309 TestStartStop/group/no-preload/serial/DeployApp 10.31
311 TestStartStop/group/no-preload/serial/Stop 12.03
312 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.2
313 TestStartStop/group/no-preload/serial/SecondStart 48.83
314 TestStartStop/group/embed-certs/serial/DeployApp 10.43
316 TestStartStop/group/embed-certs/serial/Stop 12
317 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.2
318 TestStartStop/group/embed-certs/serial/SecondStart 58.11
319 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
320 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 6.13
321 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.3
324 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 84.27
325 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
326 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.12
327 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.26
330 TestStartStop/group/newest-cni/serial/FirstStart 35.27
331 TestStartStop/group/newest-cni/serial/DeployApp 0
333 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.45
334 TestStartStop/group/newest-cni/serial/Stop 1.42
335 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.21
336 TestStartStop/group/newest-cni/serial/SecondStart 16.17
338 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.73
339 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
340 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
341 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.25
343 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.28
344 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 59.39
345 TestNetworkPlugins/group/auto/Start 86.55
346 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
347 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.09
348 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.3
350 TestNetworkPlugins/group/kindnet/Start 83.43
351 TestNetworkPlugins/group/auto/KubeletFlags 0.29
352 TestNetworkPlugins/group/auto/NetCatPod 11.27
353 TestNetworkPlugins/group/auto/DNS 0.24
354 TestNetworkPlugins/group/auto/Localhost 0.16
355 TestNetworkPlugins/group/auto/HairPin 0.2
356 TestNetworkPlugins/group/calico/Start 76.36
357 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
358 TestNetworkPlugins/group/kindnet/KubeletFlags 0.38
359 TestNetworkPlugins/group/kindnet/NetCatPod 13.44
360 TestNetworkPlugins/group/kindnet/DNS 0.19
361 TestNetworkPlugins/group/kindnet/Localhost 0.14
362 TestNetworkPlugins/group/kindnet/HairPin 0.18
363 TestNetworkPlugins/group/calico/ControllerPod 6.01
364 TestNetworkPlugins/group/custom-flannel/Start 67.57
365 TestNetworkPlugins/group/calico/KubeletFlags 0.36
366 TestNetworkPlugins/group/calico/NetCatPod 12.34
367 TestNetworkPlugins/group/calico/DNS 0.21
368 TestNetworkPlugins/group/calico/Localhost 0.16
369 TestNetworkPlugins/group/calico/HairPin 0.18
370 TestNetworkPlugins/group/enable-default-cni/Start 74.79
371 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.3
372 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.38
373 TestNetworkPlugins/group/custom-flannel/DNS 0.17
374 TestNetworkPlugins/group/custom-flannel/Localhost 0.14
375 TestNetworkPlugins/group/custom-flannel/HairPin 0.14
376 TestNetworkPlugins/group/flannel/Start 63.68
377 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.4
378 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.37
379 TestNetworkPlugins/group/enable-default-cni/DNS 0.19
380 TestNetworkPlugins/group/enable-default-cni/Localhost 0.17
381 TestNetworkPlugins/group/enable-default-cni/HairPin 0.17
382 TestNetworkPlugins/group/bridge/Start 80.41
383 TestNetworkPlugins/group/flannel/ControllerPod 6.01
384 TestNetworkPlugins/group/flannel/KubeletFlags 0.45
385 TestNetworkPlugins/group/flannel/NetCatPod 12.35
386 TestNetworkPlugins/group/flannel/DNS 0.18
387 TestNetworkPlugins/group/flannel/Localhost 0.16
388 TestNetworkPlugins/group/flannel/HairPin 0.13
389 TestNetworkPlugins/group/bridge/KubeletFlags 0.3
390 TestNetworkPlugins/group/bridge/NetCatPod 10.25
391 TestNetworkPlugins/group/bridge/DNS 0.15
392 TestNetworkPlugins/group/bridge/Localhost 0.13
393 TestNetworkPlugins/group/bridge/HairPin 0.14
TestDownloadOnly/v1.28.0/json-events (9.55s)

=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-756094 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-756094 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (9.550475738s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (9.55s)

TestDownloadOnly/v1.28.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1124 13:14:06.277735    4611 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1124 13:14:06.277821    4611 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21932-2805/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

TestDownloadOnly/v1.28.0/LogsDuration (0.11s)

=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-756094
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-756094: exit status 85 (106.545157ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-756094 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-756094 │ jenkins │ v1.37.0 │ 24 Nov 25 13:13 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 13:13:56
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 13:13:56.771686    4617 out.go:360] Setting OutFile to fd 1 ...
	I1124 13:13:56.771877    4617 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:13:56.771907    4617 out.go:374] Setting ErrFile to fd 2...
	I1124 13:13:56.772246    4617 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:13:56.772658    4617 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-2805/.minikube/bin
	W1124 13:13:56.772822    4617 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21932-2805/.minikube/config/config.json: open /home/jenkins/minikube-integration/21932-2805/.minikube/config/config.json: no such file or directory
	I1124 13:13:56.773284    4617 out.go:368] Setting JSON to true
	I1124 13:13:56.774055    4617 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":3388,"bootTime":1763986649,"procs":148,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1124 13:13:56.774158    4617 start.go:143] virtualization:  
	I1124 13:13:56.779873    4617 out.go:99] [download-only-756094] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	W1124 13:13:56.780050    4617 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/21932-2805/.minikube/cache/preloaded-tarball: no such file or directory
	I1124 13:13:56.780139    4617 notify.go:221] Checking for updates...
	I1124 13:13:56.783571    4617 out.go:171] MINIKUBE_LOCATION=21932
	I1124 13:13:56.787045    4617 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 13:13:56.790414    4617 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21932-2805/kubeconfig
	I1124 13:13:56.793610    4617 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21932-2805/.minikube
	I1124 13:13:56.796768    4617 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1124 13:13:56.802872    4617 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1124 13:13:56.803095    4617 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 13:13:56.824153    4617 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1124 13:13:56.824268    4617 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 13:13:57.230979    4617 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-11-24 13:13:57.22175259 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 13:13:57.231083    4617 docker.go:319] overlay module found
	I1124 13:13:57.234361    4617 out.go:99] Using the docker driver based on user configuration
	I1124 13:13:57.234396    4617 start.go:309] selected driver: docker
	I1124 13:13:57.234404    4617 start.go:927] validating driver "docker" against <nil>
	I1124 13:13:57.234511    4617 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 13:13:57.286548    4617 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-11-24 13:13:57.277613594 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 13:13:57.286712    4617 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1124 13:13:57.287018    4617 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1124 13:13:57.287194    4617 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1124 13:13:57.290709    4617 out.go:171] Using Docker driver with root privileges
	I1124 13:13:57.293641    4617 cni.go:84] Creating CNI manager for ""
	I1124 13:13:57.293713    4617 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 13:13:57.293725    4617 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1124 13:13:57.293801    4617 start.go:353] cluster config:
	{Name:download-only-756094 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-756094 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 13:13:57.296914    4617 out.go:99] Starting "download-only-756094" primary control-plane node in "download-only-756094" cluster
	I1124 13:13:57.296946    4617 cache.go:134] Beginning downloading kic base image for docker with crio
	I1124 13:13:57.299874    4617 out.go:99] Pulling base image v0.0.48-1763789673-21948 ...
	I1124 13:13:57.299920    4617 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1124 13:13:57.300019    4617 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1124 13:13:57.318141    4617 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f to local cache
	I1124 13:13:57.318328    4617 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local cache directory
	I1124 13:13:57.318441    4617 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f to local cache
	I1124 13:13:57.353155    4617 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1124 13:13:57.353207    4617 cache.go:65] Caching tarball of preloaded images
	I1124 13:13:57.353381    4617 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1124 13:13:57.356811    4617 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1124 13:13:57.356854    4617 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 from gcs api...
	I1124 13:13:57.443275    4617 preload.go:295] Got checksum from GCS API "e092595ade89dbfc477bd4cd6b9c633b"
	I1124 13:13:57.443429    4617 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:e092595ade89dbfc477bd4cd6b9c633b -> /home/jenkins/minikube-integration/21932-2805/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	
	
	* The control-plane node download-only-756094 host does not exist
	  To start a cluster, run: "minikube start -p download-only-756094"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.11s)

TestDownloadOnly/v1.28.0/DeleteAll (0.26s)
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.26s)

TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.21s)
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-756094
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.21s)

TestDownloadOnly/v1.34.1/json-events (10.23s)
=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-949930 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-949930 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (10.225875258s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (10.23s)

TestDownloadOnly/v1.34.1/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1124 13:14:17.086192    4611 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1124 13:14:17.086228    4611 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21932-2805/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

TestDownloadOnly/v1.34.1/LogsDuration (0.09s)
=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-949930
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-949930: exit status 85 (92.592599ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-756094 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-756094 │ jenkins │ v1.37.0 │ 24 Nov 25 13:13 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 24 Nov 25 13:14 UTC │ 24 Nov 25 13:14 UTC │
	│ delete  │ -p download-only-756094                                                                                                                                                   │ download-only-756094 │ jenkins │ v1.37.0 │ 24 Nov 25 13:14 UTC │ 24 Nov 25 13:14 UTC │
	│ start   │ -o=json --download-only -p download-only-949930 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-949930 │ jenkins │ v1.37.0 │ 24 Nov 25 13:14 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 13:14:06
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 13:14:06.905966    4813 out.go:360] Setting OutFile to fd 1 ...
	I1124 13:14:06.906164    4813 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:14:06.906189    4813 out.go:374] Setting ErrFile to fd 2...
	I1124 13:14:06.906208    4813 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:14:06.907496    4813 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-2805/.minikube/bin
	I1124 13:14:06.907950    4813 out.go:368] Setting JSON to true
	I1124 13:14:06.908849    4813 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":3398,"bootTime":1763986649,"procs":143,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1124 13:14:06.908941    4813 start.go:143] virtualization:  
	I1124 13:14:06.924897    4813 out.go:99] [download-only-949930] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1124 13:14:06.925240    4813 notify.go:221] Checking for updates...
	I1124 13:14:06.939953    4813 out.go:171] MINIKUBE_LOCATION=21932
	I1124 13:14:06.964882    4813 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 13:14:06.989715    4813 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21932-2805/kubeconfig
	I1124 13:14:07.026178    4813 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21932-2805/.minikube
	I1124 13:14:07.048671    4813 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1124 13:14:07.100406    4813 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1124 13:14:07.100767    4813 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 13:14:07.121902    4813 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1124 13:14:07.122030    4813 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 13:14:07.196706    4813 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:47 SystemTime:2025-11-24 13:14:07.187466303 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 13:14:07.196803    4813 docker.go:319] overlay module found
	I1124 13:14:07.205978    4813 out.go:99] Using the docker driver based on user configuration
	I1124 13:14:07.206019    4813 start.go:309] selected driver: docker
	I1124 13:14:07.206026    4813 start.go:927] validating driver "docker" against <nil>
	I1124 13:14:07.206144    4813 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 13:14:07.269259    4813 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:47 SystemTime:2025-11-24 13:14:07.260425012 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 13:14:07.269412    4813 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1124 13:14:07.269702    4813 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1124 13:14:07.269886    4813 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1124 13:14:07.275619    4813 out.go:171] Using Docker driver with root privileges
	I1124 13:14:07.281624    4813 cni.go:84] Creating CNI manager for ""
	I1124 13:14:07.281695    4813 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 13:14:07.281707    4813 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1124 13:14:07.281785    4813 start.go:353] cluster config:
	{Name:download-only-949930 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:download-only-949930 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 13:14:07.286257    4813 out.go:99] Starting "download-only-949930" primary control-plane node in "download-only-949930" cluster
	I1124 13:14:07.286288    4813 cache.go:134] Beginning downloading kic base image for docker with crio
	I1124 13:14:07.290742    4813 out.go:99] Pulling base image v0.0.48-1763789673-21948 ...
	I1124 13:14:07.290797    4813 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 13:14:07.290852    4813 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1124 13:14:07.306972    4813 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f to local cache
	I1124 13:14:07.307105    4813 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local cache directory
	I1124 13:14:07.307125    4813 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local cache directory, skipping pull
	I1124 13:14:07.307129    4813 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in cache, skipping pull
	I1124 13:14:07.307136    4813 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f as a tarball
	I1124 13:14:07.356976    4813 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1124 13:14:07.357005    4813 cache.go:65] Caching tarball of preloaded images
	I1124 13:14:07.357178    4813 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 13:14:07.361773    4813 out.go:99] Downloading Kubernetes v1.34.1 preload ...
	I1124 13:14:07.361807    4813 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 from gcs api...
	I1124 13:14:07.450832    4813 preload.go:295] Got checksum from GCS API "bc3e4aa50814345ef9ba3452bb5efb9f"
	I1124 13:14:07.450882    4813 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4?checksum=md5:bc3e4aa50814345ef9ba3452bb5efb9f -> /home/jenkins/minikube-integration/21932-2805/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1124 13:14:16.468608    4813 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1124 13:14:16.468984    4813 profile.go:143] Saving config to /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/download-only-949930/config.json ...
	I1124 13:14:16.469017    4813 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/download-only-949930/config.json: {Name:mk5c7d931f87ee5a9d2a61af52d5ede77ea00f35 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:14:16.469192    4813 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 13:14:16.469396    4813 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/21932-2805/.minikube/cache/linux/arm64/v1.34.1/kubectl
	
	
	* The control-plane node download-only-949930 host does not exist
	  To start a cluster, run: "minikube start -p download-only-949930"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.09s)

TestDownloadOnly/v1.34.1/DeleteAll (0.2s)
=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.20s)

TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)
=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-949930
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

TestBinaryMirror (0.59s)
=== RUN   TestBinaryMirror
I1124 13:14:18.241561    4611 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-147804 --alsologtostderr --binary-mirror http://127.0.0.1:44105 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-147804" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-147804
--- PASS: TestBinaryMirror (0.59s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-647907
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-647907: exit status 85 (66.777009ms)

-- stdout --
	* Profile "addons-647907" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-647907"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-647907
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-647907: exit status 85 (79.912901ms)

-- stdout --
	* Profile "addons-647907" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-647907"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

TestAddons/Setup (165.83s)
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p addons-647907 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-arm64 start -p addons-647907 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m45.833160018s)
--- PASS: TestAddons/Setup (165.83s)

TestAddons/serial/GCPAuth/Namespaces (0.25s)
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-647907 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-647907 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.25s)

TestAddons/serial/GCPAuth/FakeCredentials (8.85s)
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-647907 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-647907 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [4965dea7-ab1e-4641-94c0-23710f8285e0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [4965dea7-ab1e-4641-94c0-23710f8285e0] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 8.004229056s
addons_test.go:694: (dbg) Run:  kubectl --context addons-647907 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-647907 describe sa gcp-auth-test
addons_test.go:720: (dbg) Run:  kubectl --context addons-647907 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:744: (dbg) Run:  kubectl --context addons-647907 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (8.85s)

TestAddons/StoppedEnableDisable (12.59s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-647907
addons_test.go:172: (dbg) Done: out/minikube-linux-arm64 stop -p addons-647907: (12.30886542s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-647907
addons_test.go:180: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-647907
addons_test.go:185: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-647907
--- PASS: TestAddons/StoppedEnableDisable (12.59s)

TestCertOptions (38.52s)
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-097221 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
E1124 14:14:00.799846    4611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/functional-471703/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-097221 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (35.66573033s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-097221 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-097221 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-097221 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-097221" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-097221
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-097221: (2.080088697s)
--- PASS: TestCertOptions (38.52s)

TestCertExpiration (257.03s)
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-032076 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-032076 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (40.070217538s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-032076 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-032076 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (33.403244208s)
helpers_test.go:175: Cleaning up "cert-expiration-032076" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-032076
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-032076: (3.552797974s)
--- PASS: TestCertExpiration (257.03s)

TestForceSystemdFlag (41.08s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-928059 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-928059 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (38.121753727s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-928059 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-928059" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-928059
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-928059: (2.596469634s)
--- PASS: TestForceSystemdFlag (41.08s)

TestForceSystemdEnv (43.3s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-289577 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-289577 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (40.281159099s)
helpers_test.go:175: Cleaning up "force-systemd-env-289577" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-289577
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-289577: (3.016391607s)
--- PASS: TestForceSystemdEnv (43.30s)

TestErrorSpam/setup (31.54s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-998886 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-998886 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-998886 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-998886 --driver=docker  --container-runtime=crio: (31.541340544s)
--- PASS: TestErrorSpam/setup (31.54s)

TestErrorSpam/start (0.78s)
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-998886 --log_dir /tmp/nospam-998886 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-998886 --log_dir /tmp/nospam-998886 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-998886 --log_dir /tmp/nospam-998886 start --dry-run
--- PASS: TestErrorSpam/start (0.78s)

TestErrorSpam/status (1.13s)
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-998886 --log_dir /tmp/nospam-998886 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-998886 --log_dir /tmp/nospam-998886 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-998886 --log_dir /tmp/nospam-998886 status
--- PASS: TestErrorSpam/status (1.13s)

TestErrorSpam/pause (6.49s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-998886 --log_dir /tmp/nospam-998886 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-998886 --log_dir /tmp/nospam-998886 pause: exit status 80 (2.220804757s)

-- stdout --
	* Pausing node nospam-998886 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T13:21:00Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-998886 --log_dir /tmp/nospam-998886 pause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-998886 --log_dir /tmp/nospam-998886 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-998886 --log_dir /tmp/nospam-998886 pause: exit status 80 (1.804272647s)

-- stdout --
	* Pausing node nospam-998886 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T13:21:02Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-998886 --log_dir /tmp/nospam-998886 pause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-998886 --log_dir /tmp/nospam-998886 pause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-998886 --log_dir /tmp/nospam-998886 pause: exit status 80 (2.463773911s)

-- stdout --
	* Pausing node nospam-998886 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T13:21:04Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-998886 --log_dir /tmp/nospam-998886 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (6.49s)
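Note: all three pause attempts above fail identically: minikube shells into the node and runs `sudo runc list -f json`, which exits 1 because /run/runc does not exist inside the kic container, so the pause path has nothing to enumerate. A minimal manual reproduction of the failing check, assuming the docker driver names the node container after the profile (an assumption, not taken from this run):

	docker exec nospam-998886 sudo runc list -f json
	docker exec nospam-998886 ls -ld /run/runc

The unpause runs below hit the same error under GUEST_UNPAUSE, which points at the missing runc state directory rather than at the pause/unpause logic itself.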

TestErrorSpam/unpause (5.54s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-998886 --log_dir /tmp/nospam-998886 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-998886 --log_dir /tmp/nospam-998886 unpause: exit status 80 (2.084261196s)

-- stdout --
	* Unpausing node nospam-998886 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T13:21:06Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-998886 --log_dir /tmp/nospam-998886 unpause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-998886 --log_dir /tmp/nospam-998886 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-998886 --log_dir /tmp/nospam-998886 unpause: exit status 80 (1.733528556s)

-- stdout --
	* Unpausing node nospam-998886 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T13:21:08Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-998886 --log_dir /tmp/nospam-998886 unpause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-998886 --log_dir /tmp/nospam-998886 unpause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-998886 --log_dir /tmp/nospam-998886 unpause: exit status 80 (1.722643091s)

-- stdout --
	* Unpausing node nospam-998886 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T13:21:10Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-998886 --log_dir /tmp/nospam-998886 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (5.54s)
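
Note: all three unpause attempts above fail identically. The unpause path runs `sudo runc list -f json` on the node to enumerate paused containers, and runc exits 1 because its state directory /run/runc was never created on this CRI-O node. A minimal sketch of that probe, assuming direct access to the node rather than minikube's SSH runner (the listPaused helper name is hypothetical):

	// listpaused.go: reproduce the "runc list" probe behind GUEST_UNPAUSE.
	package main

	import (
		"encoding/json"
		"fmt"
		"os"
		"os/exec"
	)

	// listPaused runs `sudo runc list -f json` and returns the IDs of
	// containers whose status is "paused". When /run/runc is missing,
	// runc exits 1 and the wrapped error carries its stderr message.
	func listPaused() ([]string, error) {
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
		if err != nil {
			return nil, fmt.Errorf("runc: sudo runc list -f json: %w", err)
		}
		var containers []struct {
			ID     string `json:"id"`
			Status string `json:"status"`
		}
		if err := json.Unmarshal(out, &containers); err != nil {
			return nil, err
		}
		var paused []string
		for _, c := range containers {
			if c.Status == "paused" {
				paused = append(paused, c.ID)
			}
		}
		return paused, nil
	}

	func main() {
		ids, err := listPaused()
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println(ids)
	}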

TestErrorSpam/stop (1.54s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-998886 --log_dir /tmp/nospam-998886 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-arm64 -p nospam-998886 --log_dir /tmp/nospam-998886 stop: (1.334457037s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-998886 --log_dir /tmp/nospam-998886 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-998886 --log_dir /tmp/nospam-998886 stop
--- PASS: TestErrorSpam/stop (1.54s)

TestFunctional/serial/CopySyncFile (0.01s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21932-2805/.minikube/files/etc/test/nested/copy/4611/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.01s)

TestFunctional/serial/StartWithProxy (80.24s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-471703 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E1124 13:22:05.893580    4611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/addons-647907/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:22:05.900112    4611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/addons-647907/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:22:05.912205    4611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/addons-647907/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:22:05.933899    4611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/addons-647907/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:22:05.975615    4611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/addons-647907/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:22:06.057361    4611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/addons-647907/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:22:06.218826    4611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/addons-647907/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:22:06.540443    4611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/addons-647907/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:22:07.182256    4611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/addons-647907/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:22:08.464516    4611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/addons-647907/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:22:11.027393    4611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/addons-647907/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:22:16.149346    4611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/addons-647907/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:22:26.391474    4611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/addons-647907/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-arm64 start -p functional-471703 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m20.23976s)
--- PASS: TestFunctional/serial/StartWithProxy (80.24s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (29.53s)

=== RUN   TestFunctional/serial/SoftStart
I1124 13:22:36.802608    4611 config.go:182] Loaded profile config "functional-471703": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-471703 --alsologtostderr -v=8
E1124 13:22:46.873310    4611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/addons-647907/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-arm64 start -p functional-471703 --alsologtostderr -v=8: (29.526377203s)
functional_test.go:678: soft start took 29.530603608s for "functional-471703" cluster.
I1124 13:23:06.329289    4611 config.go:182] Loaded profile config "functional-471703": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (29.53s)

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

TestFunctional/serial/KubectlGetPods (0.11s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-471703 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.11s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.43s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-471703 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-471703 cache add registry.k8s.io/pause:3.1: (1.137796782s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-471703 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-471703 cache add registry.k8s.io/pause:3.3: (1.19555714s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-471703 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-471703 cache add registry.k8s.io/pause:latest: (1.100284267s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.43s)

TestFunctional/serial/CacheCmd/cache/add_local (1.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-471703 /tmp/TestFunctionalserialCacheCmdcacheadd_local3210892777/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-471703 cache add minikube-local-cache-test:functional-471703
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-471703 cache delete minikube-local-cache-test:functional-471703
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-471703
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.04s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-471703 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.85s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-471703 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-471703 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-471703 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (314.8263ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-471703 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-471703 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.85s)
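
Note: the sequence above is a full cache round trip: remove registry.k8s.io/pause:latest on the node, confirm `crictl inspecti` fails, run `cache reload` to push the locally cached image back, then confirm `inspecti` succeeds. A minimal sketch replaying it, assuming a minikube binary on PATH (the run helper is hypothetical):

	// cachereload.go: replay the cache-reload round trip shown above.
	package main

	import (
		"fmt"
		"os/exec"
	)

	// run invokes minikube with the given args, echoing its combined output.
	func run(args ...string) error {
		out, err := exec.Command("minikube", args...).CombinedOutput()
		fmt.Printf("$ minikube %v\n%s", args, out)
		return err
	}

	func main() {
		const profile = "functional-471703"
		const img = "registry.k8s.io/pause:latest"
		// Remove the image on the node, then prove it is gone.
		run("-p", profile, "ssh", "sudo crictl rmi "+img)
		if err := run("-p", profile, "ssh", "sudo crictl inspecti "+img); err == nil {
			fmt.Println("expected inspecti to fail after rmi")
		}
		// Reload from the local cache and verify the image is back.
		run("-p", profile, "cache", "reload")
		if err := run("-p", profile, "ssh", "sudo crictl inspecti "+img); err != nil {
			fmt.Println("image still missing after reload:", err)
		}
	}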

TestFunctional/serial/CacheCmd/cache/delete (0.11s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

TestFunctional/serial/MinikubeKubectlCmd (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-471703 kubectl -- --context functional-471703 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-471703 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

TestFunctional/serial/ExtraConfig (38.65s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-471703 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1124 13:23:27.834609    4611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/addons-647907/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-arm64 start -p functional-471703 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (38.651319024s)
functional_test.go:776: restart took 38.65142241s for "functional-471703" cluster.
I1124 13:23:52.293309    4611 config.go:182] Loaded profile config "functional-471703": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (38.65s)

TestFunctional/serial/ComponentHealth (0.1s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-471703 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)

TestFunctional/serial/LogsCmd (1.47s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-471703 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-arm64 -p functional-471703 logs: (1.464809252s)
--- PASS: TestFunctional/serial/LogsCmd (1.47s)

TestFunctional/serial/LogsFileCmd (1.48s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-471703 logs --file /tmp/TestFunctionalserialLogsFileCmd4154430494/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-arm64 -p functional-471703 logs --file /tmp/TestFunctionalserialLogsFileCmd4154430494/001/logs.txt: (1.480054977s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.48s)

TestFunctional/serial/InvalidService (4.38s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-471703 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-471703
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-471703: exit status 115 (402.98859ms)
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:31318 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-471703 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.38s)
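
Note: `minikube service` distinguishes "service exists" from "service is reachable" here: the NodePort URL is still printed, but with no running pod behind invalid-svc the command exits 115 (SVC_UNREACHABLE). A minimal sketch asserting that exit code, assuming minikube on PATH:

	// invalidsvc.go: expect exit status 115 for a service with no ready pods.
	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		err := exec.Command("minikube", "service", "invalid-svc", "-p", "functional-471703").Run()
		var ee *exec.ExitError
		if errors.As(err, &ee) && ee.ExitCode() == 115 {
			fmt.Println("got expected exit status 115 for unreachable service")
			return
		}
		fmt.Println("unexpected result:", err)
	}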

TestFunctional/parallel/ConfigCmd (0.48s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-471703 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-471703 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-471703 config get cpus: exit status 14 (65.027245ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-471703 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-471703 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-471703 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-471703 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-471703 config get cpus: exit status 14 (195.984298ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.48s)
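
Note: `config get` on an unset key does not print an empty value; it exits 14 with "specified key could not be found in config" on stderr, so callers have to branch on the exit code. A minimal sketch, assuming minikube on PATH (exitCode and mk are hypothetical helpers):

	// configcmd.go: exercise config set/get/unset and the exit-14 contract.
	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	// exitCode extracts the process exit status from an exec error (0 on success).
	func exitCode(err error) int {
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			return ee.ExitCode()
		}
		return 0
	}

	func main() {
		mk := func(args ...string) *exec.Cmd {
			return exec.Command("minikube", append([]string{"-p", "functional-471703"}, args...)...)
		}
		// Unset key: expect exit status 14.
		if code := exitCode(mk("config", "get", "cpus").Run()); code != 14 {
			fmt.Println("expected exit status 14 for unset key, got", code)
		}
		mk("config", "set", "cpus", "2").Run()
		out, _ := mk("config", "get", "cpus").Output()
		fmt.Printf("cpus = %s", out)
		mk("config", "unset", "cpus").Run()
	}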

TestFunctional/parallel/DashboardCmd (9.29s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-471703 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-471703 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 31714: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (9.29s)

TestFunctional/parallel/DryRun (0.46s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-471703 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-471703 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (212.033254ms)
-- stdout --
	* [functional-471703] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21932
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21932-2805/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21932-2805/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I1124 13:34:28.623177   31203 out.go:360] Setting OutFile to fd 1 ...
	I1124 13:34:28.623433   31203 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:34:28.623451   31203 out.go:374] Setting ErrFile to fd 2...
	I1124 13:34:28.623458   31203 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:34:28.623944   31203 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-2805/.minikube/bin
	I1124 13:34:28.624610   31203 out.go:368] Setting JSON to false
	I1124 13:34:28.625770   31203 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":4620,"bootTime":1763986649,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1124 13:34:28.625918   31203 start.go:143] virtualization:  
	I1124 13:34:28.629647   31203 out.go:179] * [functional-471703] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1124 13:34:28.633709   31203 out.go:179]   - MINIKUBE_LOCATION=21932
	I1124 13:34:28.633941   31203 notify.go:221] Checking for updates...
	I1124 13:34:28.640012   31203 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 13:34:28.642801   31203 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21932-2805/kubeconfig
	I1124 13:34:28.645741   31203 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21932-2805/.minikube
	I1124 13:34:28.648743   31203 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1124 13:34:28.651578   31203 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 13:34:28.655138   31203 config.go:182] Loaded profile config "functional-471703": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 13:34:28.655856   31203 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 13:34:28.694475   31203 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1124 13:34:28.694624   31203 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 13:34:28.761454   31203 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-24 13:34:28.752276577 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 13:34:28.761564   31203 docker.go:319] overlay module found
	I1124 13:34:28.766416   31203 out.go:179] * Using the docker driver based on existing profile
	I1124 13:34:28.769226   31203 start.go:309] selected driver: docker
	I1124 13:34:28.769245   31203 start.go:927] validating driver "docker" against &{Name:functional-471703 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-471703 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 13:34:28.769345   31203 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 13:34:28.772876   31203 out.go:203] 
	W1124 13:34:28.775671   31203 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1124 13:34:28.778471   31203 out.go:203] 
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-471703 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.46s)
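
Note: the first dry run fails validation before touching the node: the requested 250MB is below minikube's usable floor of 1800MB, so it exits 23 with RSRC_INSUFFICIENT_REQ_MEMORY. A sketch of the shape of that rule (not minikube's actual code; the constant and function names are hypothetical):

	// memcheck.go: the dry-run memory validation seen above, in outline.
	package main

	import "fmt"

	const minUsableMB = 1800 // floor the requested allocation is checked against

	func validateMemory(requestedMB int) error {
		if requestedMB < minUsableMB {
			return fmt.Errorf("RSRC_INSUFFICIENT_REQ_MEMORY: requested %dMiB is less than the usable minimum of %dMB", requestedMB, minUsableMB)
		}
		return nil
	}

	func main() {
		if err := validateMemory(250); err != nil {
			fmt.Println("X Exiting due to", err) // the dry run exits 23 here
		}
	}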

TestFunctional/parallel/InternationalLanguage (0.2s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-471703 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-471703 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (201.727494ms)
-- stdout --
	* [functional-471703] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21932
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21932-2805/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21932-2805/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I1124 13:34:30.158442   31539 out.go:360] Setting OutFile to fd 1 ...
	I1124 13:34:30.158669   31539 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:34:30.158697   31539 out.go:374] Setting ErrFile to fd 2...
	I1124 13:34:30.158717   31539 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:34:30.159139   31539 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-2805/.minikube/bin
	I1124 13:34:30.159602   31539 out.go:368] Setting JSON to false
	I1124 13:34:30.160530   31539 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":4622,"bootTime":1763986649,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1124 13:34:30.160636   31539 start.go:143] virtualization:  
	I1124 13:34:30.164099   31539 out.go:179] * [functional-471703] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1124 13:34:30.167245   31539 out.go:179]   - MINIKUBE_LOCATION=21932
	I1124 13:34:30.167336   31539 notify.go:221] Checking for updates...
	I1124 13:34:30.173016   31539 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 13:34:30.175927   31539 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21932-2805/kubeconfig
	I1124 13:34:30.178827   31539 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21932-2805/.minikube
	I1124 13:34:30.181926   31539 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1124 13:34:30.184908   31539 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 13:34:30.188450   31539 config.go:182] Loaded profile config "functional-471703": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 13:34:30.189109   31539 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 13:34:30.219848   31539 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1124 13:34:30.219964   31539 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 13:34:30.277848   31539 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-24 13:34:30.267899277 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 13:34:30.277947   31539 docker.go:319] overlay module found
	I1124 13:34:30.283169   31539 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1124 13:34:30.289227   31539 start.go:309] selected driver: docker
	I1124 13:34:30.289253   31539 start.go:927] validating driver "docker" against &{Name:functional-471703 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-471703 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 13:34:30.289344   31539 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 13:34:30.292523   31539 out.go:203] 
	W1124 13:34:30.294773   31539 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1124 13:34:30.297193   31539 out.go:203] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.20s)

TestFunctional/parallel/StatusCmd (1.07s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-471703 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-471703 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-471703 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.07s)
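
Note: the -f flag above takes a Go text/template rendered against the status struct, so the labels in the output are whatever the caller writes; the test's format string literally says "kublet", reproduced verbatim here. A minimal sketch of how the template maps onto fields, with a hypothetical, reduced Status struct:

	// statusfmt.go: render the status template used by the test above.
	package main

	import (
		"os"
		"text/template"
	)

	// Status is a reduced stand-in; minikube's real status struct has more fields.
	type Status struct {
		Host, Kubelet, APIServer, Kubeconfig string
	}

	func main() {
		// The same template string the test passes to -f.
		const f = "host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}\n"
		t := template.Must(template.New("status").Parse(f))
		t.Execute(os.Stdout, Status{"Running", "Running", "Running", "Configured"})
	}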

TestFunctional/parallel/AddonsCmd (0.18s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-471703 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-471703 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.18s)

TestFunctional/parallel/PersistentVolumeClaim (24.91s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [62fddaeb-cb2e-4b3a-b5a4-233e0d1e9227] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.0036387s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-471703 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-471703 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-471703 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-471703 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [b13ad070-74f3-4c14-be0a-4daca67cc944] Pending
helpers_test.go:352: "sp-pod" [b13ad070-74f3-4c14-be0a-4daca67cc944] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [b13ad070-74f3-4c14-be0a-4daca67cc944] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.003881906s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-471703 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-471703 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-471703 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [40d14b8a-b55d-4a53-b512-003096e9d174] Pending
helpers_test.go:352: "sp-pod" [40d14b8a-b55d-4a53-b512-003096e9d174] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.00463285s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-471703 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (24.91s)
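
Note: the persistence assertion is the interesting step above: write a file through the first sp-pod, delete and recreate the pod from the same manifest, and list the mount to prove the file survived on the claim. A minimal sketch of that round trip via kubectl, assuming the test's testdata layout (the kubectl helper is hypothetical, and a real caller would wait for the new pod to be Running before the final exec):

	// pvcpersist.go: a file survives pod recreation because it lives on the PVC.
	package main

	import (
		"fmt"
		"os/exec"
	)

	// kubectl runs a kubectl command against the functional-471703 context.
	func kubectl(args ...string) ([]byte, error) {
		args = append([]string{"--context", "functional-471703"}, args...)
		return exec.Command("kubectl", args...).CombinedOutput()
	}

	func main() {
		kubectl("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")
		kubectl("delete", "-f", "testdata/storage-provisioner/pod.yaml")
		kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")
		out, err := kubectl("exec", "sp-pod", "--", "ls", "/tmp/mount")
		fmt.Print(string(out))
		if err != nil {
			fmt.Println("file did not survive pod recreation:", err)
		}
	}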

TestFunctional/parallel/SSHCmd (0.69s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-471703 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-471703 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.69s)

TestFunctional/parallel/CpCmd (1.98s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-471703 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-471703 ssh -n functional-471703 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-471703 cp functional-471703:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2187526834/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-471703 ssh -n functional-471703 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-471703 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-471703 ssh -n functional-471703 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.98s)

TestFunctional/parallel/FileSync (0.29s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/4611/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-471703 ssh "sudo cat /etc/test/nested/copy/4611/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.29s)

TestFunctional/parallel/CertSync (1.72s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/4611.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-471703 ssh "sudo cat /etc/ssl/certs/4611.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/4611.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-471703 ssh "sudo cat /usr/share/ca-certificates/4611.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-471703 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/46112.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-471703 ssh "sudo cat /etc/ssl/certs/46112.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/46112.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-471703 ssh "sudo cat /usr/share/ca-certificates/46112.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-471703 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.72s)
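
Note: each certificate is probed at three paths: the literal copy under /etc/ssl/certs, the copy under /usr/share/ca-certificates, and the OpenSSL subject-hash name (51391683.0 here) that the system trust store uses to index it. A minimal sketch checking that the three copies are byte-identical, assuming it runs inside the node rather than through `minikube ssh`:

	// certsync.go: verify the synced certificate is identical at all three paths.
	package main

	import (
		"bytes"
		"fmt"
		"os"
	)

	func main() {
		paths := []string{
			"/etc/ssl/certs/4611.pem",
			"/usr/share/ca-certificates/4611.pem",
			"/etc/ssl/certs/51391683.0", // subject-hash name for the same cert
		}
		want, err := os.ReadFile(paths[0])
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		for _, p := range paths[1:] {
			got, err := os.ReadFile(p)
			if err != nil || !bytes.Equal(want, got) {
				fmt.Printf("%s does not match %s\n", p, paths[0])
				os.Exit(1)
			}
		}
		fmt.Println("all copies identical")
	}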

TestFunctional/parallel/NodeLabels (0.08s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-471703 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.95s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-471703 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-471703 ssh "sudo systemctl is-active docker": exit status 1 (536.537622ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-471703 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-471703 ssh "sudo systemctl is-active containerd": exit status 1 (415.020309ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.95s)
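
Note: with crio selected, the other runtimes' units are expected to be inactive, and `systemctl is-active` exits non-zero for any state but active (status 3 here), so the non-zero exits above are the passing outcome. A minimal sketch of the same check, run on the node:

	// runtimecheck.go: confirm non-selected container runtimes are disabled.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		for _, unit := range []string{"docker", "containerd"} {
			// Output still carries stdout ("inactive") when the exit code is non-zero.
			out, err := exec.Command("systemctl", "is-active", unit).Output()
			state := strings.TrimSpace(string(out))
			if err != nil && state == "inactive" {
				fmt.Printf("%s: inactive as expected\n", unit)
				continue
			}
			fmt.Printf("%s: unexpected state %q (err=%v)\n", unit, state, err)
		}
	}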

TestFunctional/parallel/License (2.47s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
functional_test.go:2293: (dbg) Done: out/minikube-linux-arm64 license: (2.466458228s)
--- PASS: TestFunctional/parallel/License (2.47s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.6s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-471703 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-471703 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-471703 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 26430: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-471703 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.60s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-471703 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.34s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-471703 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [021aebb0-0522-4419-bc64-8a743894fde9] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [021aebb0-0522-4419-bc64-8a743894fde9] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.005652307s
I1124 13:24:10.142234    4611 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.34s)

TestFunctional/parallel/Version/short (0.09s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-471703 version --short
--- PASS: TestFunctional/parallel/Version/short (0.09s)

TestFunctional/parallel/Version/components (1.12s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-471703 version -o=json --components
functional_test.go:2275: (dbg) Done: out/minikube-linux-arm64 -p functional-471703 version -o=json --components: (1.123081913s)
--- PASS: TestFunctional/parallel/Version/components (1.12s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-471703 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-471703 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-471703 image ls --format short --alsologtostderr:
I1124 13:34:40.875348   32230 out.go:360] Setting OutFile to fd 1 ...
I1124 13:34:40.875532   32230 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 13:34:40.875545   32230 out.go:374] Setting ErrFile to fd 2...
I1124 13:34:40.875580   32230 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 13:34:40.875880   32230 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-2805/.minikube/bin
I1124 13:34:40.876530   32230 config.go:182] Loaded profile config "functional-471703": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1124 13:34:40.876685   32230 config.go:182] Loaded profile config "functional-471703": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1124 13:34:40.877276   32230 cli_runner.go:164] Run: docker container inspect functional-471703 --format={{.State.Status}}
I1124 13:34:40.897262   32230 ssh_runner.go:195] Run: systemctl --version
I1124 13:34:40.897377   32230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-471703
I1124 13:34:40.915069   32230 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/functional-471703/id_rsa Username:docker}
I1124 13:34:41.030321   32230 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.27s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-471703 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-471703 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ ba04bb24b9575 │ 29MB   │
│ localhost/my-image                      │ functional-471703  │ 18098c56d77bd │ 1.64MB │
│ registry.k8s.io/pause                   │ 3.10.1             │ d7b100cd9a77b │ 520kB  │
│ registry.k8s.io/pause                   │ 3.3                │ 3d18732f8686c │ 487kB  │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ a1894772a478e │ 206MB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ b1a8c6f707935 │ 111MB  │
│ docker.io/library/nginx                 │ alpine             │ cbad6347cca28 │ 54.8MB │
│ docker.io/library/nginx                 │ latest             │ bb747ca923a5e │ 176MB  │
│ gcr.io/k8s-minikube/busybox             │ latest             │ 71a676dd070f4 │ 1.63MB │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 138784d87c9c5 │ 73.2MB │
│ registry.k8s.io/kube-scheduler          │ v1.34.1            │ b5f57ec6b9867 │ 51.6MB │
│ registry.k8s.io/kube-apiserver          │ v1.34.1            │ 43911e833d64d │ 84.8MB │
│ registry.k8s.io/kube-controller-manager │ v1.34.1            │ 7eb2c6ff0c5a7 │ 72.6MB │
│ registry.k8s.io/kube-proxy              │ v1.34.1            │ 05baa95f5142d │ 75.9MB │
│ registry.k8s.io/pause                   │ 3.1                │ 8057e0500773a │ 529kB  │
│ registry.k8s.io/pause                   │ latest             │ 8cb2091f603e7 │ 246kB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 1611cd07b61d5 │ 3.77MB │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-471703 image ls --format table --alsologtostderr:
I1124 13:34:45.660738   33334 out.go:360] Setting OutFile to fd 1 ...
I1124 13:34:45.660912   33334 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 13:34:45.660942   33334 out.go:374] Setting ErrFile to fd 2...
I1124 13:34:45.660963   33334 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 13:34:45.661252   33334 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-2805/.minikube/bin
I1124 13:34:45.661884   33334 config.go:182] Loaded profile config "functional-471703": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1124 13:34:45.662050   33334 config.go:182] Loaded profile config "functional-471703": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1124 13:34:45.662645   33334 cli_runner.go:164] Run: docker container inspect functional-471703 --format={{.State.Status}}
I1124 13:34:45.689736   33334 ssh_runner.go:195] Run: systemctl --version
I1124 13:34:45.689801   33334 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-471703
I1124 13:34:45.721106   33334 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/functional-471703/id_rsa Username:docker}
I1124 13:34:45.830549   33334 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.30s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-471703 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-471703 image ls --format json --alsologtostderr:
[{"id":"b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"111333938"},{"id":"bb747ca923a5e1139baddd6f4743e0c0c74df58f4ad8ddbc10ab183b92f5a5c7","repoDigests":["docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42","docker.io/library/nginx@sha256:7de350c1fbb1f7b119a1d08f69fef5c92624cb01e03bc25c0ae11072b8969712"],"repoTags":["docker.io/library/nginx:latest"],"size":"175943180"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"]
,"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:1276f2ef2e44c06f37d7c3cccaa3f0100d5f4e939e5cfde42343962da346857f","registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"72629077"},{"id":"05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9","repoDigests":["registry.k8s.io/kube-proxy@sha256:90d560a712188ee40c7d03b070c8f2cbcb3097081e62306bc7e68e438cceb9a6","registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"75938711"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repo
Tags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},{"id":"138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"73195387"},{"id":"b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0","repoDigests":["registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500","registry.k8s.io/kube-scheduler@sha256:d69ae11adb423
3d440c302583adee9e3a37cf3626484476fe18ec821953e951e"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"51592017"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},{"id":"a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e","repoDigests":["registry.k8s.io/etcd@sha256:5db83f9e7ee85732a647f5cf5fbdf85652afa8561b66c99f20756080ebd82ea5","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"205987068"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pau
se:3.3"],"size":"487479"},{"id":"cbad6347cca28a6ee7b08793856bc6fcb2c2c7a377a62a5e6d785895c4194ac1","repoDigests":["docker.io/library/nginx@sha256:7391b3732e7f7ccd23ff1d02fbeadcde496f374d7460ad8e79260f8f6d2c9f90","docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14"],"repoTags":["docker.io/library/nginx:alpine"],"size":"54837949"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196","repoDigests":["registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902","registry.k8s.io/kube-apiserver@sha256:ffe89a0fe39dd71bb6eee7
066c95512bd4a8365cb6df23eaf60e70209fe79645"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"84753391"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"519884"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-471703 image ls --format json --alsologtostderr:
I1124 13:34:41.143261   32298 out.go:360] Setting OutFile to fd 1 ...
I1124 13:34:41.144733   32298 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 13:34:41.147404   32298 out.go:374] Setting ErrFile to fd 2...
I1124 13:34:41.147462   32298 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 13:34:41.147771   32298 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-2805/.minikube/bin
I1124 13:34:41.148538   32298 config.go:182] Loaded profile config "functional-471703": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1124 13:34:41.148715   32298 config.go:182] Loaded profile config "functional-471703": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1124 13:34:41.149464   32298 cli_runner.go:164] Run: docker container inspect functional-471703 --format={{.State.Status}}
I1124 13:34:41.172333   32298 ssh_runner.go:195] Run: systemctl --version
I1124 13:34:41.172386   32298 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-471703
I1124 13:34:41.194003   32298 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/functional-471703/id_rsa Username:docker}
I1124 13:34:41.302922   32298 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.33s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-471703 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-471703 image ls --format yaml --alsologtostderr:
- id: 71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9
- gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
repoTags:
- gcr.io/k8s-minikube/busybox:latest
size: "1634527"
- id: 138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "73195387"
- id: 43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
- registry.k8s.io/kube-apiserver@sha256:ffe89a0fe39dd71bb6eee7066c95512bd4a8365cb6df23eaf60e70209fe79645
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "84753391"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
repoTags: []
size: "42263767"
- id: cbad6347cca28a6ee7b08793856bc6fcb2c2c7a377a62a5e6d785895c4194ac1
repoDigests:
- docker.io/library/nginx@sha256:7391b3732e7f7ccd23ff1d02fbeadcde496f374d7460ad8e79260f8f6d2c9f90
- docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14
repoTags:
- docker.io/library/nginx:alpine
size: "54837949"
- id: bb747ca923a5e1139baddd6f4743e0c0c74df58f4ad8ddbc10ab183b92f5a5c7
repoDigests:
- docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42
- docker.io/library/nginx@sha256:7de350c1fbb1f7b119a1d08f69fef5c92624cb01e03bc25c0ae11072b8969712
repoTags:
- docker.io/library/nginx:latest
size: "175943180"
- id: b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
- registry.k8s.io/kube-scheduler@sha256:d69ae11adb4233d440c302583adee9e3a37cf3626484476fe18ec821953e951e
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "51592017"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: 18098c56d77bdf74187d512190e3f0d4fa4dbd0fd8cfd2764bb488d2dd91fc66
repoDigests:
- localhost/my-image@sha256:193cc23b41adb367696f7d920586095e2d67d08452f41b1d2be3b08a45678541
repoTags:
- localhost/my-image:functional-471703
size: "1640791"
- id: a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e
repoDigests:
- registry.k8s.io/etcd@sha256:5db83f9e7ee85732a647f5cf5fbdf85652afa8561b66c99f20756080ebd82ea5
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "205987068"
- id: 7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:1276f2ef2e44c06f37d7c3cccaa3f0100d5f4e939e5cfde42343962da346857f
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "72629077"
- id: d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f
repoTags:
- registry.k8s.io/pause:3.10.1
size: "519884"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: 05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9
repoDigests:
- registry.k8s.io/kube-proxy@sha256:90d560a712188ee40c7d03b070c8f2cbcb3097081e62306bc7e68e438cceb9a6
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "75938711"
- id: b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "111333938"
- id: e30dc38b700bb5d24df864c4ab0160bf1ffb056ec323d38da82e6675c40af504
repoDigests:
- docker.io/library/3949b1ca6d6d7c70c7dde5ee7ce714d29544ef20ad938a6d3555839b62f5523c-tmp@sha256:568ac104ae58d84eb1582011affd25d676a6bb635363ea3a040825adee344cf7
repoTags: []
size: "1638179"
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-471703 image ls --format yaml --alsologtostderr:
I1124 13:34:45.348731   33265 out.go:360] Setting OutFile to fd 1 ...
I1124 13:34:45.349271   33265 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 13:34:45.349306   33265 out.go:374] Setting ErrFile to fd 2...
I1124 13:34:45.349326   33265 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 13:34:45.349632   33265 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-2805/.minikube/bin
I1124 13:34:45.350317   33265 config.go:182] Loaded profile config "functional-471703": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1124 13:34:45.350520   33265 config.go:182] Loaded profile config "functional-471703": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1124 13:34:45.351125   33265 cli_runner.go:164] Run: docker container inspect functional-471703 --format={{.State.Status}}
I1124 13:34:45.376859   33265 ssh_runner.go:195] Run: systemctl --version
I1124 13:34:45.376917   33265 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-471703
I1124 13:34:45.404979   33265 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/functional-471703/id_rsa Username:docker}
I1124 13:34:45.515136   33265 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.33s)

TestFunctional/parallel/ImageCommands/ImageBuild (4.56s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-471703 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-471703 ssh pgrep buildkitd: exit status 1 (341.658573ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-471703 image build -t localhost/my-image:functional-471703 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-471703 image build -t localhost/my-image:functional-471703 testdata/build --alsologtostderr: (3.908057027s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-arm64 -p functional-471703 image build -t localhost/my-image:functional-471703 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> e30dc38b700
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-471703
--> 18098c56d77
Successfully tagged localhost/my-image:functional-471703
18098c56d77bdf74187d512190e3f0d4fa4dbd0fd8cfd2764bb488d2dd91fc66
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-471703 image build -t localhost/my-image:functional-471703 testdata/build --alsologtostderr:
I1124 13:34:41.795010   32495 out.go:360] Setting OutFile to fd 1 ...
I1124 13:34:41.795240   32495 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 13:34:41.795272   32495 out.go:374] Setting ErrFile to fd 2...
I1124 13:34:41.795293   32495 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 13:34:41.795740   32495 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-2805/.minikube/bin
I1124 13:34:41.796748   32495 config.go:182] Loaded profile config "functional-471703": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1124 13:34:41.798095   32495 config.go:182] Loaded profile config "functional-471703": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1124 13:34:41.798828   32495 cli_runner.go:164] Run: docker container inspect functional-471703 --format={{.State.Status}}
I1124 13:34:41.826187   32495 ssh_runner.go:195] Run: systemctl --version
I1124 13:34:41.826240   32495 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-471703
I1124 13:34:41.848260   32495 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/functional-471703/id_rsa Username:docker}
I1124 13:34:41.956062   32495 build_images.go:162] Building image from path: /tmp/build.3697643624.tar
I1124 13:34:41.956131   32495 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1124 13:34:41.966396   32495 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3697643624.tar
I1124 13:34:41.971675   32495 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3697643624.tar: stat -c "%s %y" /var/lib/minikube/build/build.3697643624.tar: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3697643624.tar': No such file or directory
I1124 13:34:41.971719   32495 ssh_runner.go:362] scp /tmp/build.3697643624.tar --> /var/lib/minikube/build/build.3697643624.tar (3072 bytes)
I1124 13:34:41.997739   32495 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3697643624
I1124 13:34:42.014132   32495 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3697643624 -xf /var/lib/minikube/build/build.3697643624.tar
I1124 13:34:42.027370   32495 crio.go:315] Building image: /var/lib/minikube/build/build.3697643624
I1124 13:34:42.027440   32495 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-471703 /var/lib/minikube/build/build.3697643624 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I1124 13:34:45.561882   32495 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-471703 /var/lib/minikube/build/build.3697643624 --cgroup-manager=cgroupfs: (3.534416011s)
I1124 13:34:45.561961   32495 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3697643624
I1124 13:34:45.573082   32495 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3697643624.tar
I1124 13:34:45.587448   32495 build_images.go:218] Built localhost/my-image:functional-471703 from /tmp/build.3697643624.tar
I1124 13:34:45.587491   32495 build_images.go:134] succeeded building to: functional-471703
I1124 13:34:45.587497   32495 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-471703 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.56s)
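
The STEP lines above pin down the shape of the testdata/build context. A reconstruction of the Dockerfile they imply (a sketch, not the verbatim file; content.txt is whatever the test data ships):

	$ cat testdata/build/Dockerfile
	FROM gcr.io/k8s-minikube/busybox
	RUN true
	ADD content.txt /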

TestFunctional/parallel/ImageCommands/Setup (0.66s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-471703
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.66s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.51s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-471703 image rm kicbase/echo-server:functional-471703 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-471703 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.51s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.25s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-471703 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.25s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.16s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-471703 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.16s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.2s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-471703 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.20s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-471703 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.108.164.26 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
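
The direct-access check boils down to an HTTP request against the LoadBalancer ingress IP that minikube tunnel routes to nginx-svc. Reproduced by hand it would look like this (a sketch; the IP is the one reported above and only resolves while the tunnel is running, and the 200 is assumed from the "is working!" message):

	$ curl -s -o /dev/null -w '%{http_code}\n' http://10.108.164.26/
	200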

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-471703 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/MountCmd/any-port (7.57s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-471703 /tmp/TestFunctionalparallelMountCmdany-port807469004/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1763990651029827728" to /tmp/TestFunctionalparallelMountCmdany-port807469004/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1763990651029827728" to /tmp/TestFunctionalparallelMountCmdany-port807469004/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1763990651029827728" to /tmp/TestFunctionalparallelMountCmdany-port807469004/001/test-1763990651029827728
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-471703 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-471703 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (477.530287ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I1124 13:24:11.508311    4611 retry.go:31] will retry after 661.659322ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-471703 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-471703 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Nov 24 13:24 created-by-test
-rw-r--r-- 1 docker docker 24 Nov 24 13:24 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Nov 24 13:24 test-1763990651029827728
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-471703 ssh cat /mount-9p/test-1763990651029827728
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-471703 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [84b346f3-e234-434d-b850-692ad279829b] Pending
helpers_test.go:352: "busybox-mount" [84b346f3-e234-434d-b850-692ad279829b] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [84b346f3-e234-434d-b850-692ad279829b] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [84b346f3-e234-434d-b850-692ad279829b] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.003806453s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-471703 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-471703 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-471703 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-471703 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-471703 /tmp/TestFunctionalparallelMountCmdany-port807469004/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.57s)

TestFunctional/parallel/MountCmd/specific-port (2.34s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-471703 /tmp/TestFunctionalparallelMountCmdspecific-port1762126064/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-471703 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-471703 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (568.951949ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I1124 13:24:19.166364    4611 retry.go:31] will retry after 461.042601ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-471703 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-471703 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-471703 /tmp/TestFunctionalparallelMountCmdspecific-port1762126064/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-471703 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-471703 ssh "sudo umount -f /mount-9p": exit status 1 (360.687218ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-471703 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-471703 /tmp/TestFunctionalparallelMountCmdspecific-port1762126064/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.34s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.77s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-471703 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2246723281/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-471703 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2246723281/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-471703 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2246723281/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-471703 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-471703 ssh "findmnt -T" /mount1: exit status 1 (1.008228174s)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I1124 13:24:21.953216    4611 retry.go:31] will retry after 577.921331ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-471703 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-471703 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-471703 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-471703 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-471703 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2246723281/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-471703 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2246723281/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-471703 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2246723281/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.77s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.46s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.46s)

TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "368.338931ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "51.90566ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.42s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "361.172843ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "54.246653ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.42s)

TestFunctional/parallel/ServiceCmd/List (1.33s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-471703 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-arm64 -p functional-471703 service list: (1.327269836s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.33s)

TestFunctional/parallel/ServiceCmd/JSONOutput (1.37s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-471703 service list -o json
2025/11/24 13:34:39 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:1499: (dbg) Done: out/minikube-linux-arm64 -p functional-471703 service list -o json: (1.369333991s)
functional_test.go:1504: Took "1.369430714s" to run "out/minikube-linux-arm64 -p functional-471703 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.37s)

TestFunctional/delete_echo-server_images (0.05s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-471703
--- PASS: TestFunctional/delete_echo-server_images (0.05s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-471703
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-471703
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (208.82s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 -p ha-040749 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1124 13:37:05.888126    4611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/addons-647907/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 -p ha-040749 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (3m27.904056095s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-040749 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (208.82s)

TestMultiControlPlane/serial/DeployApp (7.72s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 -p ha-040749 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 -p ha-040749 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 -p ha-040749 kubectl -- rollout status deployment/busybox: (5.040252449s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-040749 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 -p ha-040749 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-040749 kubectl -- exec busybox-7b57f96db7-7tglz -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-040749 kubectl -- exec busybox-7b57f96db7-8kznk -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-040749 kubectl -- exec busybox-7b57f96db7-d5pf6 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-040749 kubectl -- exec busybox-7b57f96db7-7tglz -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-040749 kubectl -- exec busybox-7b57f96db7-8kznk -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-040749 kubectl -- exec busybox-7b57f96db7-d5pf6 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-040749 kubectl -- exec busybox-7b57f96db7-7tglz -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-040749 kubectl -- exec busybox-7b57f96db7-8kznk -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-040749 kubectl -- exec busybox-7b57f96db7-d5pf6 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.72s)
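
The DNS sweep above is mechanical: every busybox replica must resolve an external name, the cluster-local service name, and its fully qualified form. A compact Go sketch of that loop, with pod names copied from this run (normally they would be discovered via `kubectl get pods`):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	pods := []string{"busybox-7b57f96db7-7tglz", "busybox-7b57f96db7-8kznk", "busybox-7b57f96db7-d5pf6"}
	names := []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"}
	for _, name := range names {
		for _, pod := range pods {
			// Mirrors the `kubectl -- exec <pod> -- nslookup <name>` calls above.
			out, err := exec.Command("kubectl", "--context", "ha-040749",
				"exec", pod, "--", "nslookup", name).CombinedOutput()
			if err != nil {
				fmt.Printf("%s failed to resolve %s: %v\n%s", pod, name, err, out)
				continue
			}
			fmt.Printf("%s resolved %s\n", pod, name)
		}
	}
}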

TestMultiControlPlane/serial/PingHostFromPods (1.45s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 -p ha-040749 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-040749 kubectl -- exec busybox-7b57f96db7-7tglz -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-040749 kubectl -- exec busybox-7b57f96db7-7tglz -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-040749 kubectl -- exec busybox-7b57f96db7-8kznk -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-040749 kubectl -- exec busybox-7b57f96db7-8kznk -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-040749 kubectl -- exec busybox-7b57f96db7-d5pf6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-040749 kubectl -- exec busybox-7b57f96db7-d5pf6 -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.45s)
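
The shell pipeline above (`nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3`) extracts the resolved IP by taking line 5 of nslookup's output and field 3 of that line, then pings it. A literal Go translation of the parsing step, with the same line/field assumptions; busybox nslookup formatting varies by version, which is exactly why the test pins NR==5:

package main

import (
	"fmt"
	"strings"
)

// hostIPFromNslookup mimics `awk 'NR==5' | cut -d' ' -f3`.
func hostIPFromNslookup(out string) (string, bool) {
	lines := strings.Split(out, "\n")
	if len(lines) < 5 {
		return "", false
	}
	fields := strings.Split(lines[4], " ") // NR==5 -> index 4
	if len(fields) < 3 {
		return "", false
	}
	return fields[2], true // cut -f3 -> index 2
}

func main() {
	// Hypothetical busybox-style output; the real layout depends on the image.
	sample := `Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      host.minikube.internal
Address 1: 192.168.49.1`
	ip, ok := hostIPFromNslookup(sample)
	fmt.Println(ip, ok) // 192.168.49.1 true
}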

TestMultiControlPlane/serial/AddWorkerNode (59.47s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 -p ha-040749 node add --alsologtostderr -v 5
E1124 13:38:28.960431    4611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/addons-647907/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:39:00.799534    4611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/functional-471703/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:39:00.805855    4611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/functional-471703/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:39:00.817195    4611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/functional-471703/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:39:00.838661    4611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/functional-471703/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:39:00.880012    4611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/functional-471703/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:39:00.961467    4611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/functional-471703/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:39:01.123024    4611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/functional-471703/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:39:01.444549    4611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/functional-471703/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:39:02.086595    4611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/functional-471703/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:39:03.368773    4611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/functional-471703/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:39:05.930194    4611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/functional-471703/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:39:11.051766    4611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/functional-471703/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:39:21.293183    4611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/functional-471703/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 -p ha-040749 node add --alsologtostderr -v 5: (58.398048846s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-040749 status --alsologtostderr -v 5
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-040749 status --alsologtostderr -v 5: (1.076509031s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (59.47s)

TestMultiControlPlane/serial/NodeLabels (0.1s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-040749 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.10s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (1.06s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.054915047s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.06s)

TestMultiControlPlane/serial/CopyFile (20.05s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-040749 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-040749 status --output json --alsologtostderr -v 5: (1.076322426s)
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-040749 cp testdata/cp-test.txt ha-040749:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-040749 ssh -n ha-040749 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-040749 cp ha-040749:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile435869207/001/cp-test_ha-040749.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-040749 ssh -n ha-040749 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-040749 cp ha-040749:/home/docker/cp-test.txt ha-040749-m02:/home/docker/cp-test_ha-040749_ha-040749-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-040749 ssh -n ha-040749 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-040749 ssh -n ha-040749-m02 "sudo cat /home/docker/cp-test_ha-040749_ha-040749-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-040749 cp ha-040749:/home/docker/cp-test.txt ha-040749-m03:/home/docker/cp-test_ha-040749_ha-040749-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-040749 ssh -n ha-040749 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-040749 ssh -n ha-040749-m03 "sudo cat /home/docker/cp-test_ha-040749_ha-040749-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-040749 cp ha-040749:/home/docker/cp-test.txt ha-040749-m04:/home/docker/cp-test_ha-040749_ha-040749-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-040749 ssh -n ha-040749 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-040749 ssh -n ha-040749-m04 "sudo cat /home/docker/cp-test_ha-040749_ha-040749-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-040749 cp testdata/cp-test.txt ha-040749-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-040749 ssh -n ha-040749-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-040749 cp ha-040749-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile435869207/001/cp-test_ha-040749-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-040749 ssh -n ha-040749-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-040749 cp ha-040749-m02:/home/docker/cp-test.txt ha-040749:/home/docker/cp-test_ha-040749-m02_ha-040749.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-040749 ssh -n ha-040749-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-040749 ssh -n ha-040749 "sudo cat /home/docker/cp-test_ha-040749-m02_ha-040749.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-040749 cp ha-040749-m02:/home/docker/cp-test.txt ha-040749-m03:/home/docker/cp-test_ha-040749-m02_ha-040749-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-040749 ssh -n ha-040749-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-040749 ssh -n ha-040749-m03 "sudo cat /home/docker/cp-test_ha-040749-m02_ha-040749-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-040749 cp ha-040749-m02:/home/docker/cp-test.txt ha-040749-m04:/home/docker/cp-test_ha-040749-m02_ha-040749-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-040749 ssh -n ha-040749-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-040749 ssh -n ha-040749-m04 "sudo cat /home/docker/cp-test_ha-040749-m02_ha-040749-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-040749 cp testdata/cp-test.txt ha-040749-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-040749 ssh -n ha-040749-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-040749 cp ha-040749-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile435869207/001/cp-test_ha-040749-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-040749 ssh -n ha-040749-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-040749 cp ha-040749-m03:/home/docker/cp-test.txt ha-040749:/home/docker/cp-test_ha-040749-m03_ha-040749.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-040749 ssh -n ha-040749-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-040749 ssh -n ha-040749 "sudo cat /home/docker/cp-test_ha-040749-m03_ha-040749.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-040749 cp ha-040749-m03:/home/docker/cp-test.txt ha-040749-m02:/home/docker/cp-test_ha-040749-m03_ha-040749-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-040749 ssh -n ha-040749-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-040749 ssh -n ha-040749-m02 "sudo cat /home/docker/cp-test_ha-040749-m03_ha-040749-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-040749 cp ha-040749-m03:/home/docker/cp-test.txt ha-040749-m04:/home/docker/cp-test_ha-040749-m03_ha-040749-m04.txt
E1124 13:39:41.775471    4611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/functional-471703/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-040749 ssh -n ha-040749-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-040749 ssh -n ha-040749-m04 "sudo cat /home/docker/cp-test_ha-040749-m03_ha-040749-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-040749 cp testdata/cp-test.txt ha-040749-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-040749 ssh -n ha-040749-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-040749 cp ha-040749-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile435869207/001/cp-test_ha-040749-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-040749 ssh -n ha-040749-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-040749 cp ha-040749-m04:/home/docker/cp-test.txt ha-040749:/home/docker/cp-test_ha-040749-m04_ha-040749.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-040749 ssh -n ha-040749-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-040749 ssh -n ha-040749 "sudo cat /home/docker/cp-test_ha-040749-m04_ha-040749.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-040749 cp ha-040749-m04:/home/docker/cp-test.txt ha-040749-m02:/home/docker/cp-test_ha-040749-m04_ha-040749-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-040749 ssh -n ha-040749-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-040749 ssh -n ha-040749-m02 "sudo cat /home/docker/cp-test_ha-040749-m04_ha-040749-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-040749 cp ha-040749-m04:/home/docker/cp-test.txt ha-040749-m03:/home/docker/cp-test_ha-040749-m04_ha-040749-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-040749 ssh -n ha-040749-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-040749 ssh -n ha-040749-m03 "sudo cat /home/docker/cp-test_ha-040749-m04_ha-040749-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (20.05s)
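
The block above exercises every (source, destination) pair in the cluster: copy with `minikube cp`, then read the file back over `minikube ssh` on the destination. A sketch of that matrix under the same naming convention; the testdata and /tmp legs of the real test are omitted:

package main

import (
	"fmt"
	"os/exec"
)

func run(args ...string) error {
	cmd := exec.Command("out/minikube-linux-arm64", append([]string{"-p", "ha-040749"}, args...)...)
	if out, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("%v: %s", err, out)
	}
	return nil
}

func main() {
	nodes := []string{"ha-040749", "ha-040749-m02", "ha-040749-m03", "ha-040749-m04"}
	for _, src := range nodes {
		for _, dst := range nodes {
			if src == dst {
				continue
			}
			remote := fmt.Sprintf("%s:/home/docker/cp-test_%s_%s.txt", dst, src, dst)
			if err := run("cp", src+":/home/docker/cp-test.txt", remote); err != nil {
				fmt.Println("copy failed:", err)
				continue
			}
			// Read the file back on the destination node to confirm the copy landed.
			if err := run("ssh", "-n", dst, fmt.Sprintf("sudo cat /home/docker/cp-test_%s_%s.txt", src, dst)); err != nil {
				fmt.Println("verify failed:", err)
			}
		}
	}
}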

TestMultiControlPlane/serial/StopSecondaryNode (13.4s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-040749 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-040749 node stop m02 --alsologtostderr -v 5: (12.1357262s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-040749 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-040749 status --alsologtostderr -v 5: exit status 7 (1.266117273s)

-- stdout --
	ha-040749
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-040749-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-040749-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-040749-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I1124 13:39:59.997708   47938 out.go:360] Setting OutFile to fd 1 ...
	I1124 13:39:59.997852   47938 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:39:59.997858   47938 out.go:374] Setting ErrFile to fd 2...
	I1124 13:39:59.997864   47938 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:39:59.998156   47938 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-2805/.minikube/bin
	I1124 13:39:59.998447   47938 out.go:368] Setting JSON to false
	I1124 13:39:59.998541   47938 notify.go:221] Checking for updates...
	I1124 13:39:59.999733   47938 mustload.go:66] Loading cluster: ha-040749
	I1124 13:40:00.000232   47938 config.go:182] Loaded profile config "ha-040749": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 13:40:00.000247   47938 status.go:174] checking status of ha-040749 ...
	I1124 13:40:00.000860   47938 cli_runner.go:164] Run: docker container inspect ha-040749 --format={{.State.Status}}
	I1124 13:40:00.191211   47938 status.go:371] ha-040749 host status = "Running" (err=<nil>)
	I1124 13:40:00.191256   47938 host.go:66] Checking if "ha-040749" exists ...
	I1124 13:40:00.192172   47938 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-040749
	I1124 13:40:00.288431   47938 host.go:66] Checking if "ha-040749" exists ...
	I1124 13:40:00.288804   47938 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 13:40:00.288876   47938 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-040749
	I1124 13:40:00.383222   47938 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/ha-040749/id_rsa Username:docker}
	I1124 13:40:00.607640   47938 ssh_runner.go:195] Run: systemctl --version
	I1124 13:40:00.616358   47938 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 13:40:00.648500   47938 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 13:40:00.721499   47938 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:62 OomKillDisable:true NGoroutines:72 SystemTime:2025-11-24 13:40:00.705584232 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 13:40:00.722127   47938 kubeconfig.go:125] found "ha-040749" server: "https://192.168.49.254:8443"
	I1124 13:40:00.722171   47938 api_server.go:166] Checking apiserver status ...
	I1124 13:40:00.722221   47938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 13:40:00.736359   47938 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1229/cgroup
	I1124 13:40:00.748322   47938 api_server.go:182] apiserver freezer: "8:freezer:/docker/b7e3e30c7826f2f1cf5476afd89592894714f7d4389556a95c54fc63bca6521e/crio/crio-686d8f76385ffb955b6501d85250377220d028479292aee6759b10f53fd51b45"
	I1124 13:40:00.748401   47938 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/b7e3e30c7826f2f1cf5476afd89592894714f7d4389556a95c54fc63bca6521e/crio/crio-686d8f76385ffb955b6501d85250377220d028479292aee6759b10f53fd51b45/freezer.state
	I1124 13:40:00.758293   47938 api_server.go:204] freezer state: "THAWED"
	I1124 13:40:00.758326   47938 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1124 13:40:00.767863   47938 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1124 13:40:00.767896   47938 status.go:463] ha-040749 apiserver status = Running (err=<nil>)
	I1124 13:40:00.767908   47938 status.go:176] ha-040749 status: &{Name:ha-040749 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1124 13:40:00.767950   47938 status.go:174] checking status of ha-040749-m02 ...
	I1124 13:40:00.768333   47938 cli_runner.go:164] Run: docker container inspect ha-040749-m02 --format={{.State.Status}}
	I1124 13:40:00.788392   47938 status.go:371] ha-040749-m02 host status = "Stopped" (err=<nil>)
	I1124 13:40:00.788416   47938 status.go:384] host is not running, skipping remaining checks
	I1124 13:40:00.788424   47938 status.go:176] ha-040749-m02 status: &{Name:ha-040749-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1124 13:40:00.788446   47938 status.go:174] checking status of ha-040749-m03 ...
	I1124 13:40:00.788787   47938 cli_runner.go:164] Run: docker container inspect ha-040749-m03 --format={{.State.Status}}
	I1124 13:40:00.814669   47938 status.go:371] ha-040749-m03 host status = "Running" (err=<nil>)
	I1124 13:40:00.814694   47938 host.go:66] Checking if "ha-040749-m03" exists ...
	I1124 13:40:00.814999   47938 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-040749-m03
	I1124 13:40:00.835334   47938 host.go:66] Checking if "ha-040749-m03" exists ...
	I1124 13:40:00.835715   47938 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 13:40:00.835766   47938 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-040749-m03
	I1124 13:40:00.858144   47938 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/ha-040749-m03/id_rsa Username:docker}
	I1124 13:40:00.964897   47938 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 13:40:00.980144   47938 kubeconfig.go:125] found "ha-040749" server: "https://192.168.49.254:8443"
	I1124 13:40:00.980175   47938 api_server.go:166] Checking apiserver status ...
	I1124 13:40:00.980217   47938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 13:40:00.991914   47938 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1179/cgroup
	I1124 13:40:01.001885   47938 api_server.go:182] apiserver freezer: "8:freezer:/docker/fd622aaf73a4218fbe556026fd114e6d8233b1b9b9cd7bfe59f11cc73b017075/crio/crio-42113f5668cf6919b6505b02725016b8a1dd628d5d8f8da7ad3f185442a9dbf4"
	I1124 13:40:01.001978   47938 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/fd622aaf73a4218fbe556026fd114e6d8233b1b9b9cd7bfe59f11cc73b017075/crio/crio-42113f5668cf6919b6505b02725016b8a1dd628d5d8f8da7ad3f185442a9dbf4/freezer.state
	I1124 13:40:01.011750   47938 api_server.go:204] freezer state: "THAWED"
	I1124 13:40:01.011786   47938 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1124 13:40:01.020702   47938 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1124 13:40:01.020744   47938 status.go:463] ha-040749-m03 apiserver status = Running (err=<nil>)
	I1124 13:40:01.020754   47938 status.go:176] ha-040749-m03 status: &{Name:ha-040749-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1124 13:40:01.020774   47938 status.go:174] checking status of ha-040749-m04 ...
	I1124 13:40:01.021106   47938 cli_runner.go:164] Run: docker container inspect ha-040749-m04 --format={{.State.Status}}
	I1124 13:40:01.039534   47938 status.go:371] ha-040749-m04 host status = "Running" (err=<nil>)
	I1124 13:40:01.039564   47938 host.go:66] Checking if "ha-040749-m04" exists ...
	I1124 13:40:01.039882   47938 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-040749-m04
	I1124 13:40:01.058752   47938 host.go:66] Checking if "ha-040749-m04" exists ...
	I1124 13:40:01.059061   47938 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 13:40:01.059108   47938 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-040749-m04
	I1124 13:40:01.077898   47938 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/ha-040749-m04/id_rsa Username:docker}
	I1124 13:40:01.184071   47938 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 13:40:01.206726   47938 status.go:176] ha-040749-m04 status: &{Name:ha-040749-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (13.40s)
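
The stderr trace above shows how each "Running" control plane is verified: find the kube-apiserver PID, confirm its cgroup freezer state reads THAWED, then GET /healthz on the load-balancer VIP and expect 200 with body "ok". A minimal sketch of the final probe; TLS verification is skipped here for brevity, whereas minikube validates against the cluster CA:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// VIP and port copied from this run's kubeconfig check.
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
	}}
	resp, err := client.Get("https://192.168.49.254:8443/healthz")
	if err != nil {
		fmt.Println("apiserver unreachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz: %d %s\n", resp.StatusCode, body) // expect: 200 ok
}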

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.85s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.85s)

TestMultiControlPlane/serial/RestartSecondaryNode (31.43s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-040749 node start m02 --alsologtostderr -v 5
E1124 13:40:22.737526    4611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/functional-471703/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-040749 node start m02 --alsologtostderr -v 5: (29.878787036s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-040749 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-040749 status --alsologtostderr -v 5: (1.419783984s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (31.43s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.48s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.480035171s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.48s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (134.26s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 -p ha-040749 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 -p ha-040749 stop --alsologtostderr -v 5
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 -p ha-040749 stop --alsologtostderr -v 5: (26.906619531s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 -p ha-040749 start --wait true --alsologtostderr -v 5
E1124 13:41:44.659091    4611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/functional-471703/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:42:05.886986    4611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/addons-647907/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 -p ha-040749 start --wait true --alsologtostderr -v 5: (1m47.178852634s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 -p ha-040749 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (134.26s)

TestMultiControlPlane/serial/DeleteSecondaryNode (12.34s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-040749 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-040749 node delete m03 --alsologtostderr -v 5: (11.356040352s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-040749 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (12.34s)
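
The go-template handed to kubectl in the last step prints one status line per node that reports a Ready condition. It can be dry-run locally with Go's text/template package over mock data shaped like `kubectl get nodes -o json` (the mock below is illustrative):

package main

import (
	"os"
	"text/template"
)

func main() {
	const tmpl = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`
	nodes := map[string]any{
		"items": []map[string]any{
			{"status": map[string]any{"conditions": []map[string]any{
				{"type": "Ready", "status": "True"},
			}}},
			{"status": map[string]any{"conditions": []map[string]any{
				{"type": "MemoryPressure", "status": "False"},
				{"type": "Ready", "status": "True"},
			}}},
		},
	}
	t := template.Must(template.New("ready").Parse(tmpl))
	// Prints " True" once per node carrying a Ready condition.
	if err := t.Execute(os.Stdout, nodes); err != nil {
		panic(err)
	}
}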

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.83s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.83s)

TestMultiControlPlane/serial/StopCluster (36.07s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-040749 stop --alsologtostderr -v 5
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-040749 stop --alsologtostderr -v 5: (35.946688659s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-040749 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-040749 status --alsologtostderr -v 5: exit status 7 (119.316096ms)

-- stdout --
	ha-040749
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-040749-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-040749-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1124 13:43:38.398129   59822 out.go:360] Setting OutFile to fd 1 ...
	I1124 13:43:38.398277   59822 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:43:38.398289   59822 out.go:374] Setting ErrFile to fd 2...
	I1124 13:43:38.398309   59822 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:43:38.399208   59822 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-2805/.minikube/bin
	I1124 13:43:38.399492   59822 out.go:368] Setting JSON to false
	I1124 13:43:38.399555   59822 mustload.go:66] Loading cluster: ha-040749
	I1124 13:43:38.399647   59822 notify.go:221] Checking for updates...
	I1124 13:43:38.400029   59822 config.go:182] Loaded profile config "ha-040749": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 13:43:38.400072   59822 status.go:174] checking status of ha-040749 ...
	I1124 13:43:38.400664   59822 cli_runner.go:164] Run: docker container inspect ha-040749 --format={{.State.Status}}
	I1124 13:43:38.420590   59822 status.go:371] ha-040749 host status = "Stopped" (err=<nil>)
	I1124 13:43:38.420609   59822 status.go:384] host is not running, skipping remaining checks
	I1124 13:43:38.420615   59822 status.go:176] ha-040749 status: &{Name:ha-040749 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1124 13:43:38.420648   59822 status.go:174] checking status of ha-040749-m02 ...
	I1124 13:43:38.420950   59822 cli_runner.go:164] Run: docker container inspect ha-040749-m02 --format={{.State.Status}}
	I1124 13:43:38.451231   59822 status.go:371] ha-040749-m02 host status = "Stopped" (err=<nil>)
	I1124 13:43:38.451252   59822 status.go:384] host is not running, skipping remaining checks
	I1124 13:43:38.451259   59822 status.go:176] ha-040749-m02 status: &{Name:ha-040749-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1124 13:43:38.451278   59822 status.go:174] checking status of ha-040749-m04 ...
	I1124 13:43:38.451616   59822 cli_runner.go:164] Run: docker container inspect ha-040749-m04 --format={{.State.Status}}
	I1124 13:43:38.468572   59822 status.go:371] ha-040749-m04 host status = "Stopped" (err=<nil>)
	I1124 13:43:38.468593   59822 status.go:384] host is not running, skipping remaining checks
	I1124 13:43:38.468599   59822 status.go:176] ha-040749-m04 status: &{Name:ha-040749-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.07s)
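
Each host line in the status output above comes from `docker container inspect --format={{.State.Status}}` (visible in the stderr trace); when the container is not running, the remaining kubelet/apiserver checks are skipped. A simplified sketch of that probe; the state mapping below is an assumption, as minikube's own status code distinguishes more Docker states:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func hostStatus(container string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect",
		container, "--format={{.State.Status}}").Output()
	if err != nil {
		return "", err
	}
	switch state := strings.TrimSpace(string(out)); state {
	case "running":
		return "Running", nil
	case "created", "exited":
		return "Stopped", nil
	default:
		return state, nil // paused, restarting, dead, ...
	}
}

func main() {
	for _, node := range []string{"ha-040749", "ha-040749-m02", "ha-040749-m04"} {
		status, err := hostStatus(node)
		fmt.Println(node, status, err)
	}
}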

TestMultiControlPlane/serial/RestartCluster (67.5s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 -p ha-040749 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1124 13:44:00.799652    4611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/functional-471703/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:44:28.501278    4611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/functional-471703/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 -p ha-040749 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (1m6.343382076s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-040749 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (67.50s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.85s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.85s)

TestMultiControlPlane/serial/AddSecondaryNode (93.58s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 -p ha-040749 node add --control-plane --alsologtostderr -v 5
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 -p ha-040749 node add --control-plane --alsologtostderr -v 5: (1m32.462830483s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-040749 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-040749 status --alsologtostderr -v 5: (1.121886533s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (93.58s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.11s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.109941277s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.11s)

TestJSONOutput/start/Command (81.7s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-921919 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
E1124 13:47:05.886973    4611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/addons-647907/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-921919 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (1m21.687942472s)
--- PASS: TestJSONOutput/start/Command (81.70s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.85s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-921919 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-921919 --output=json --user=testUser: (5.846349559s)
--- PASS: TestJSONOutput/stop/Command (5.85s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.24s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-841765 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-841765 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (91.940108ms)

-- stdout --
	{"specversion":"1.0","id":"32de9e59-26f0-4adf-ba83-62bed18eec9c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-841765] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"1139211e-7aa0-44cf-8c03-42e50761a2f9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21932"}}
	{"specversion":"1.0","id":"74e7ec77-dfa2-450d-a764-3b0a1078812f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"ec8de1b2-213e-4965-98f4-e171a8a9a8e7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21932-2805/kubeconfig"}}
	{"specversion":"1.0","id":"7dd83007-e470-4f4d-a9db-a54537800510","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21932-2805/.minikube"}}
	{"specversion":"1.0","id":"e25c1bdd-6bed-4e12-934f-a625af3860ef","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"8f43644e-1d6d-44c8-996f-fe0dbcd532fd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"1469a1b9-15c5-401e-9d9f-69b3901848a5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-841765" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-841765
--- PASS: TestErrorJSONOutput (0.24s)
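
The stdout above is minikube's CloudEvents-style stream for --output=json: one JSON envelope per line whose type ends in .step, .info, or .error, carrying a string-valued data map. A sketch that scans such a stream and surfaces the error event the test asserts on:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
	"strings"
)

type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	// Pipe `minikube start --output=json ...` into stdin.
	scanner := bufio.NewScanner(os.Stdin)
	for scanner.Scan() {
		var ev event
		if err := json.Unmarshal(scanner.Bytes(), &ev); err != nil {
			continue // skip any non-JSON lines
		}
		if strings.HasSuffix(ev.Type, ".error") {
			fmt.Printf("error %s (exit code %s): %s\n",
				ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
		}
	}
}

Fed the stdout above, this would print: error DRV_UNSUPPORTED_OS (exit code 56): The driver 'fail' is not supported on linux/arm64.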

TestKicCustomNetwork/create_custom_network (69.63s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-139160 --network=
E1124 13:49:00.799630    4611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/functional-471703/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-139160 --network=: (1m7.443018721s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-139160" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-139160
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-139160: (2.159138011s)
--- PASS: TestKicCustomNetwork/create_custom_network (69.63s)

TestKicCustomNetwork/use_default_bridge_network (33.76s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-056711 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-056711 --network=bridge: (31.602422716s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-056711" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-056711
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-056711: (2.124012119s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (33.76s)

TestKicExistingNetwork (34.32s)

=== RUN   TestKicExistingNetwork
I1124 13:49:51.051922    4611 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1124 13:49:51.076242    4611 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1124 13:49:51.076326    4611 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1124 13:49:51.076344    4611 cli_runner.go:164] Run: docker network inspect existing-network
W1124 13:49:51.097891    4611 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1124 13:49:51.097921    4611 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I1124 13:49:51.097948    4611 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I1124 13:49:51.098050    4611 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1124 13:49:51.115934    4611 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-b3087ee9f269 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:1a:07:60:94:e6:54} reservation:<nil>}
I1124 13:49:51.116252    4611 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001e25ff0}
I1124 13:49:51.116283    4611 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1124 13:49:51.116334    4611 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1124 13:49:51.177706    4611 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-736901 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-736901 --network=existing-network: (32.012802496s)
helpers_test.go:175: Cleaning up "existing-network-736901" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-736901
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-736901: (2.14822535s)
I1124 13:50:25.355321    4611 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (34.32s)
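
Note: this test's setup can be reproduced by hand: pre-create a bridge network, then pass its name to minikube. A sketch mirroring the commands in the log ("my-net" and the profile name are illustrative; the subnet avoids 192.168.49.0/24, which the first cluster typically claims):

  docker network create --driver=bridge \
    --subnet=192.168.58.0/24 --gateway=192.168.58.1 my-net
  minikube start -p reuse-demo --network=my-net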

TestKicCustomSubnet (37.92s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-094038 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-094038 --subnet=192.168.60.0/24: (35.682385012s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-094038 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-094038" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-094038
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-094038: (2.208401656s)
--- PASS: TestKicCustomSubnet (37.92s)

TestKicStaticIP (36.46s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-261337 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-261337 --static-ip=192.168.200.200: (34.151645988s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-261337 ip
helpers_test.go:175: Cleaning up "static-ip-261337" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-261337
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-261337: (2.16673045s)
--- PASS: TestKicStaticIP (36.46s)
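
Note: the static-IP flow above reduces to two commands (profile name illustrative); minikube sets up a Docker network that can accommodate the pinned address:

  minikube start -p ip-demo --static-ip=192.168.200.200
  minikube -p ip-demo ip    # should print 192.168.200.200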

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (72.44s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-941218 --driver=docker  --container-runtime=crio
E1124 13:52:05.888424    4611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/addons-647907/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-941218 --driver=docker  --container-runtime=crio: (34.305462166s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-943729 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-943729 --driver=docker  --container-runtime=crio: (32.450009996s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-941218
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-943729
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-943729" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-943729
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-943729: (2.135918584s)
helpers_test.go:175: Cleaning up "first-941218" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-941218
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-941218: (2.068686903s)
--- PASS: TestMinikubeProfile (72.44s)
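
Note: condensed, the profile round-trip above looks like this (profile names illustrative; the jq step for reading the JSON listing is an assumption, not part of the test):

  minikube start -p first --driver=docker --container-runtime=crio
  minikube start -p second --driver=docker --container-runtime=crio
  minikube profile first                                 # make "first" active
  minikube profile list -ojson | jq -r '.valid[].Name'   # enumerate profiles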

TestMountStart/serial/StartWithMountFirst (8.67s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-337233 --memory=3072 --mount-string /tmp/TestMountStartserial2675600920/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-337233 --memory=3072 --mount-string /tmp/TestMountStartserial2675600920/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (7.666142091s)
--- PASS: TestMountStart/serial/StartWithMountFirst (8.67s)
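
Note: stripped of harness specifics, the 9p host mount exercised here is (paths and port illustrative; --mount is added to make the intent explicit):

  minikube start -p mnt-demo --no-kubernetes --mount \
    --mount-string /tmp/shared:/minikube-host --mount-port 46464
  minikube -p mnt-demo ssh -- ls /minikube-host   # the share should be visible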

TestMountStart/serial/VerifyMountFirst (0.27s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-337233 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.27s)

TestMountStart/serial/StartWithMountSecond (8.91s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-339112 --memory=3072 --mount-string /tmp/TestMountStartserial2675600920/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-339112 --memory=3072 --mount-string /tmp/TestMountStartserial2675600920/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (7.909735675s)
--- PASS: TestMountStart/serial/StartWithMountSecond (8.91s)

TestMountStart/serial/VerifyMountSecond (0.28s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-339112 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.28s)

TestMountStart/serial/DeleteFirst (1.71s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-337233 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-337233 --alsologtostderr -v=5: (1.713783955s)
--- PASS: TestMountStart/serial/DeleteFirst (1.71s)

TestMountStart/serial/VerifyMountPostDelete (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-339112 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)

TestMountStart/serial/Stop (1.3s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-339112
mount_start_test.go:196: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-339112: (1.297233801s)
--- PASS: TestMountStart/serial/Stop (1.30s)

TestMountStart/serial/RestartStopped (8s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-339112
mount_start_test.go:207: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-339112: (7.004160842s)
--- PASS: TestMountStart/serial/RestartStopped (8.00s)

TestMountStart/serial/VerifyMountPostStop (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-339112 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)

TestMultiNode/serial/FreshStart2Nodes (135.14s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-179227 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1124 13:54:00.799516    4611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/functional-471703/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:55:08.961810    4611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/addons-647907/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:55:23.863115    4611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/functional-471703/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-179227 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (2m14.579309847s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-179227 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (135.14s)
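
Note: the two-node bring-up above boils down to (profile name illustrative):

  minikube start -p mn-demo --nodes=2 --memory=3072 \
    --driver=docker --container-runtime=crio
  minikube -p mn-demo status   # one control plane plus one worker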

TestMultiNode/serial/DeployApp2Nodes (4.93s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-179227 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-179227 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-179227 -- rollout status deployment/busybox: (3.202461137s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-179227 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-179227 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-179227 -- exec busybox-7b57f96db7-2tq2p -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-179227 -- exec busybox-7b57f96db7-bjl2f -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-179227 -- exec busybox-7b57f96db7-2tq2p -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-179227 -- exec busybox-7b57f96db7-bjl2f -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-179227 -- exec busybox-7b57f96db7-2tq2p -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-179227 -- exec busybox-7b57f96db7-bjl2f -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.93s)

TestMultiNode/serial/PingHostFrom2Pods (0.97s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-179227 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-179227 -- exec busybox-7b57f96db7-2tq2p -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-179227 -- exec busybox-7b57f96db7-2tq2p -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-179227 -- exec busybox-7b57f96db7-bjl2f -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-179227 -- exec busybox-7b57f96db7-bjl2f -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.97s)
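
Note: the shell pipeline in this test is worth unpacking: it resolves host.minikube.internal inside the pod and strips the answer down to a bare address that can then be pinged. Standalone (the pod name is illustrative; the line and field positions match busybox's nslookup output as relied on here):

  kubectl exec busybox-pod -- sh -c \
    "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"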

TestMultiNode/serial/AddNode (58.36s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-179227 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-179227 -v=5 --alsologtostderr: (57.651415774s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-179227 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (58.36s)

TestMultiNode/serial/MultiNodeLabels (0.1s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-179227 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.10s)

TestMultiNode/serial/ProfileList (0.73s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.73s)

TestMultiNode/serial/CopyFile (10.56s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-179227 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-179227 cp testdata/cp-test.txt multinode-179227:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-179227 ssh -n multinode-179227 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-179227 cp multinode-179227:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1726957110/001/cp-test_multinode-179227.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-179227 ssh -n multinode-179227 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-179227 cp multinode-179227:/home/docker/cp-test.txt multinode-179227-m02:/home/docker/cp-test_multinode-179227_multinode-179227-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-179227 ssh -n multinode-179227 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-179227 ssh -n multinode-179227-m02 "sudo cat /home/docker/cp-test_multinode-179227_multinode-179227-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-179227 cp multinode-179227:/home/docker/cp-test.txt multinode-179227-m03:/home/docker/cp-test_multinode-179227_multinode-179227-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-179227 ssh -n multinode-179227 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-179227 ssh -n multinode-179227-m03 "sudo cat /home/docker/cp-test_multinode-179227_multinode-179227-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-179227 cp testdata/cp-test.txt multinode-179227-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-179227 ssh -n multinode-179227-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-179227 cp multinode-179227-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1726957110/001/cp-test_multinode-179227-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-179227 ssh -n multinode-179227-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-179227 cp multinode-179227-m02:/home/docker/cp-test.txt multinode-179227:/home/docker/cp-test_multinode-179227-m02_multinode-179227.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-179227 ssh -n multinode-179227-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-179227 ssh -n multinode-179227 "sudo cat /home/docker/cp-test_multinode-179227-m02_multinode-179227.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-179227 cp multinode-179227-m02:/home/docker/cp-test.txt multinode-179227-m03:/home/docker/cp-test_multinode-179227-m02_multinode-179227-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-179227 ssh -n multinode-179227-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-179227 ssh -n multinode-179227-m03 "sudo cat /home/docker/cp-test_multinode-179227-m02_multinode-179227-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-179227 cp testdata/cp-test.txt multinode-179227-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-179227 ssh -n multinode-179227-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-179227 cp multinode-179227-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1726957110/001/cp-test_multinode-179227-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-179227 ssh -n multinode-179227-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-179227 cp multinode-179227-m03:/home/docker/cp-test.txt multinode-179227:/home/docker/cp-test_multinode-179227-m03_multinode-179227.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-179227 ssh -n multinode-179227-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-179227 ssh -n multinode-179227 "sudo cat /home/docker/cp-test_multinode-179227-m03_multinode-179227.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-179227 cp multinode-179227-m03:/home/docker/cp-test.txt multinode-179227-m02:/home/docker/cp-test_multinode-179227-m03_multinode-179227-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-179227 ssh -n multinode-179227-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-179227 ssh -n multinode-179227-m02 "sudo cat /home/docker/cp-test_multinode-179227-m03_multinode-179227-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.56s)
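
Note: the whole CopyFile matrix above is one command shape, minikube cp, with an optional <node>: prefix on either argument (profile and node names illustrative):

  minikube -p mn-demo cp testdata/cp-test.txt mn-demo:/home/docker/cp-test.txt   # host -> node
  minikube -p mn-demo cp mn-demo:/home/docker/cp-test.txt /tmp/cp-test.txt       # node -> host
  minikube -p mn-demo cp mn-demo:/home/docker/cp-test.txt \
    mn-demo-m02:/home/docker/cp-test.txt                                         # node -> node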

TestMultiNode/serial/StopNode (2.46s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-179227 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-179227 node stop m03: (1.347415919s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-179227 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-179227 status: exit status 7 (557.574801ms)

-- stdout --
	multinode-179227
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-179227-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-179227-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-179227 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-179227 status --alsologtostderr: exit status 7 (549.958928ms)

-- stdout --
	multinode-179227
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-179227-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-179227-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1124 13:56:56.552097  110131 out.go:360] Setting OutFile to fd 1 ...
	I1124 13:56:56.552214  110131 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:56:56.552224  110131 out.go:374] Setting ErrFile to fd 2...
	I1124 13:56:56.552230  110131 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:56:56.552602  110131 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-2805/.minikube/bin
	I1124 13:56:56.552843  110131 out.go:368] Setting JSON to false
	I1124 13:56:56.552880  110131 mustload.go:66] Loading cluster: multinode-179227
	I1124 13:56:56.553566  110131 config.go:182] Loaded profile config "multinode-179227": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 13:56:56.553585  110131 status.go:174] checking status of multinode-179227 ...
	I1124 13:56:56.554314  110131 cli_runner.go:164] Run: docker container inspect multinode-179227 --format={{.State.Status}}
	I1124 13:56:56.556022  110131 notify.go:221] Checking for updates...
	I1124 13:56:56.577246  110131 status.go:371] multinode-179227 host status = "Running" (err=<nil>)
	I1124 13:56:56.577267  110131 host.go:66] Checking if "multinode-179227" exists ...
	I1124 13:56:56.577597  110131 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-179227
	I1124 13:56:56.609306  110131 host.go:66] Checking if "multinode-179227" exists ...
	I1124 13:56:56.609673  110131 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 13:56:56.609742  110131 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-179227
	I1124 13:56:56.629085  110131 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/multinode-179227/id_rsa Username:docker}
	I1124 13:56:56.736846  110131 ssh_runner.go:195] Run: systemctl --version
	I1124 13:56:56.743841  110131 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 13:56:56.757181  110131 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 13:56:56.826210  110131 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-24 13:56:56.816402523 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 13:56:56.826819  110131 kubeconfig.go:125] found "multinode-179227" server: "https://192.168.67.2:8443"
	I1124 13:56:56.826853  110131 api_server.go:166] Checking apiserver status ...
	I1124 13:56:56.826901  110131 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 13:56:56.838594  110131 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1245/cgroup
	I1124 13:56:56.847074  110131 api_server.go:182] apiserver freezer: "8:freezer:/docker/083587b57e5f9044cc675d06edbf855b2c5c6b148838eea2b55b232a5e8719c1/crio/crio-d4553b344e20b2cc12be0b3ec8f37e4f2a608dd11cb8230b94b155a1ba03ad8b"
	I1124 13:56:56.847144  110131 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/083587b57e5f9044cc675d06edbf855b2c5c6b148838eea2b55b232a5e8719c1/crio/crio-d4553b344e20b2cc12be0b3ec8f37e4f2a608dd11cb8230b94b155a1ba03ad8b/freezer.state
	I1124 13:56:56.855055  110131 api_server.go:204] freezer state: "THAWED"
	I1124 13:56:56.855085  110131 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1124 13:56:56.863489  110131 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1124 13:56:56.863517  110131 status.go:463] multinode-179227 apiserver status = Running (err=<nil>)
	I1124 13:56:56.863528  110131 status.go:176] multinode-179227 status: &{Name:multinode-179227 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1124 13:56:56.863544  110131 status.go:174] checking status of multinode-179227-m02 ...
	I1124 13:56:56.863875  110131 cli_runner.go:164] Run: docker container inspect multinode-179227-m02 --format={{.State.Status}}
	I1124 13:56:56.880490  110131 status.go:371] multinode-179227-m02 host status = "Running" (err=<nil>)
	I1124 13:56:56.880514  110131 host.go:66] Checking if "multinode-179227-m02" exists ...
	I1124 13:56:56.880843  110131 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-179227-m02
	I1124 13:56:56.897904  110131 host.go:66] Checking if "multinode-179227-m02" exists ...
	I1124 13:56:56.898267  110131 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 13:56:56.898335  110131 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-179227-m02
	I1124 13:56:56.916006  110131 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/21932-2805/.minikube/machines/multinode-179227-m02/id_rsa Username:docker}
	I1124 13:56:57.018221  110131 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 13:56:57.033933  110131 status.go:176] multinode-179227-m02 status: &{Name:multinode-179227-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1124 13:56:57.033974  110131 status.go:174] checking status of multinode-179227-m03 ...
	I1124 13:56:57.034332  110131 cli_runner.go:164] Run: docker container inspect multinode-179227-m03 --format={{.State.Status}}
	I1124 13:56:57.052820  110131 status.go:371] multinode-179227-m03 host status = "Stopped" (err=<nil>)
	I1124 13:56:57.052842  110131 status.go:384] host is not running, skipping remaining checks
	I1124 13:56:57.052849  110131 status.go:176] multinode-179227-m03 status: &{Name:multinode-179227-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.46s)
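
Note: the non-zero exits above are expected: minikube status reports a stopped node through its exit code (7 in this run) rather than failing outright, so scripts can branch on it. A sketch (profile and node names illustrative):

  minikube -p mn-demo node stop m03
  minikube -p mn-demo status
  rc=$?                                      # non-zero while any node is down
  [ "$rc" -ne 0 ] && echo "cluster is not fully running"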

TestMultiNode/serial/StartAfterStop (8.05s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-179227 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-179227 node start m03 -v=5 --alsologtostderr: (7.251645189s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-179227 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (8.05s)

TestMultiNode/serial/RestartKeepsNodes (72.25s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-179227
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-179227
E1124 13:57:05.887345    4611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/addons-647907/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-179227: (25.093320449s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-179227 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-179227 --wait=true -v=5 --alsologtostderr: (47.042904129s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-179227
--- PASS: TestMultiNode/serial/RestartKeepsNodes (72.25s)
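
Note: condensed, the restart round-trip verified above is (profile name illustrative):

  minikube node list -p mn-demo            # record the node set
  minikube stop -p mn-demo
  minikube start -p mn-demo --wait=true
  minikube node list -p mn-demo            # same nodes as before the stop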

TestMultiNode/serial/DeleteNode (5.65s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-179227 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-179227 node delete m03: (4.960216815s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-179227 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.65s)

TestMultiNode/serial/StopMultiNode (23.96s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-179227 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-179227 stop: (23.779839635s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-179227 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-179227 status: exit status 7 (96.395614ms)

-- stdout --
	multinode-179227
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-179227-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-179227 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-179227 status --alsologtostderr: exit status 7 (87.083621ms)

-- stdout --
	multinode-179227
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-179227-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1124 13:58:46.934313  117995 out.go:360] Setting OutFile to fd 1 ...
	I1124 13:58:46.934500  117995 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:58:46.934535  117995 out.go:374] Setting ErrFile to fd 2...
	I1124 13:58:46.934558  117995 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:58:46.934864  117995 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-2805/.minikube/bin
	I1124 13:58:46.935085  117995 out.go:368] Setting JSON to false
	I1124 13:58:46.935140  117995 mustload.go:66] Loading cluster: multinode-179227
	I1124 13:58:46.935693  117995 config.go:182] Loaded profile config "multinode-179227": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 13:58:46.935749  117995 status.go:174] checking status of multinode-179227 ...
	I1124 13:58:46.936322  117995 cli_runner.go:164] Run: docker container inspect multinode-179227 --format={{.State.Status}}
	I1124 13:58:46.935189  117995 notify.go:221] Checking for updates...
	I1124 13:58:46.956675  117995 status.go:371] multinode-179227 host status = "Stopped" (err=<nil>)
	I1124 13:58:46.956706  117995 status.go:384] host is not running, skipping remaining checks
	I1124 13:58:46.956713  117995 status.go:176] multinode-179227 status: &{Name:multinode-179227 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1124 13:58:46.956759  117995 status.go:174] checking status of multinode-179227-m02 ...
	I1124 13:58:46.957071  117995 cli_runner.go:164] Run: docker container inspect multinode-179227-m02 --format={{.State.Status}}
	I1124 13:58:46.974178  117995 status.go:371] multinode-179227-m02 host status = "Stopped" (err=<nil>)
	I1124 13:58:46.974198  117995 status.go:384] host is not running, skipping remaining checks
	I1124 13:58:46.974205  117995 status.go:176] multinode-179227-m02 status: &{Name:multinode-179227-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.96s)

TestMultiNode/serial/RestartMultiNode (50.23s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-179227 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1124 13:59:00.800191    4611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/functional-471703/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-179227 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (49.523202202s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-179227 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (50.23s)

TestMultiNode/serial/ValidateNameConflict (39.42s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-179227
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-179227-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-179227-m02 --driver=docker  --container-runtime=crio: exit status 14 (96.641711ms)

-- stdout --
	* [multinode-179227-m02] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21932
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21932-2805/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21932-2805/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-179227-m02' is duplicated with machine name 'multinode-179227-m02' in profile 'multinode-179227'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-179227-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-179227-m03 --driver=docker  --container-runtime=crio: (36.823337456s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-179227
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-179227: exit status 80 (351.404058ms)

-- stdout --
	* Adding node m03 to cluster multinode-179227 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-179227-m03 already exists in multinode-179227-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-179227-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-179227-m03: (2.092015003s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (39.42s)
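
Note: the two failures above encode minikube's naming rule: extra machines of profile P are auto-named P-m02, P-m03, and so on, so a new profile may not collide with an existing machine name, and node add refuses a derived name that already belongs elsewhere. For instance (names illustrative):

  minikube start -p demo --nodes=2   # creates machines "demo" and "demo-m02"
  minikube start -p demo-m02         # rejected (exit 14): duplicates a machine name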

TestPreload (160.39s)

=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-822051 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0
preload_test.go:43: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-822051 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0: (1m5.798252283s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-822051 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-arm64 -p test-preload-822051 image pull gcr.io/k8s-minikube/busybox: (2.30599085s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-822051
preload_test.go:57: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-822051: (5.87459498s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-822051 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E1124 14:02:05.888199    4611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/addons-647907/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:65: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-822051 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (1m23.703081393s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-822051 image list
helpers_test.go:175: Cleaning up "test-preload-822051" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-822051
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-822051: (2.478500553s)
--- PASS: TestPreload (160.39s)
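
Note: the preload check above condenses to pulling an image into a cluster started with --preload=false and confirming it survives a stop/start cycle (profile name illustrative):

  minikube start -p pre-demo --preload=false --driver=docker \
    --container-runtime=crio --kubernetes-version=v1.32.0
  minikube -p pre-demo image pull gcr.io/k8s-minikube/busybox
  minikube stop -p pre-demo
  minikube start -p pre-demo --driver=docker --container-runtime=crio
  minikube -p pre-demo image list   # busybox should still be listed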

TestScheduledStopUnix (110.28s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-782737 --memory=3072 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-782737 --memory=3072 --driver=docker  --container-runtime=crio: (32.787366628s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-782737 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

** stderr ** 
	I1124 14:03:34.148766  132034 out.go:360] Setting OutFile to fd 1 ...
	I1124 14:03:34.148944  132034 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 14:03:34.148969  132034 out.go:374] Setting ErrFile to fd 2...
	I1124 14:03:34.148989  132034 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 14:03:34.149358  132034 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-2805/.minikube/bin
	I1124 14:03:34.149945  132034 out.go:368] Setting JSON to false
	I1124 14:03:34.150128  132034 mustload.go:66] Loading cluster: scheduled-stop-782737
	I1124 14:03:34.150516  132034 config.go:182] Loaded profile config "scheduled-stop-782737": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 14:03:34.150626  132034 profile.go:143] Saving config to /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/scheduled-stop-782737/config.json ...
	I1124 14:03:34.150838  132034 mustload.go:66] Loading cluster: scheduled-stop-782737
	I1124 14:03:34.150996  132034 config.go:182] Loaded profile config "scheduled-stop-782737": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1

** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-782737 -n scheduled-stop-782737
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-782737 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

** stderr ** 
	I1124 14:03:34.603449  132122 out.go:360] Setting OutFile to fd 1 ...
	I1124 14:03:34.603585  132122 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 14:03:34.603597  132122 out.go:374] Setting ErrFile to fd 2...
	I1124 14:03:34.603603  132122 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 14:03:34.604339  132122 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-2805/.minikube/bin
	I1124 14:03:34.604905  132122 out.go:368] Setting JSON to false
	I1124 14:03:34.605155  132122 daemonize_unix.go:73] killing process 132050 as it is an old scheduled stop
	I1124 14:03:34.605249  132122 mustload.go:66] Loading cluster: scheduled-stop-782737
	I1124 14:03:34.605678  132122 config.go:182] Loaded profile config "scheduled-stop-782737": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 14:03:34.605793  132122 profile.go:143] Saving config to /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/scheduled-stop-782737/config.json ...
	I1124 14:03:34.606022  132122 mustload.go:66] Loading cluster: scheduled-stop-782737
	I1124 14:03:34.606191  132122 config.go:182] Loaded profile config "scheduled-stop-782737": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1

** /stderr **
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:180: process 132050 is a zombie
I1124 14:03:34.610315    4611 retry.go:31] will retry after 63.733µs: open /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/scheduled-stop-782737/pid: no such file or directory
I1124 14:03:34.611050    4611 retry.go:31] will retry after 161.765µs: open /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/scheduled-stop-782737/pid: no such file or directory
I1124 14:03:34.612296    4611 retry.go:31] will retry after 281.311µs: open /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/scheduled-stop-782737/pid: no such file or directory
I1124 14:03:34.613411    4611 retry.go:31] will retry after 467.068µs: open /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/scheduled-stop-782737/pid: no such file or directory
I1124 14:03:34.614520    4611 retry.go:31] will retry after 274.95µs: open /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/scheduled-stop-782737/pid: no such file or directory
I1124 14:03:34.615606    4611 retry.go:31] will retry after 660.878µs: open /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/scheduled-stop-782737/pid: no such file or directory
I1124 14:03:34.616699    4611 retry.go:31] will retry after 1.10127ms: open /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/scheduled-stop-782737/pid: no such file or directory
I1124 14:03:34.618876    4611 retry.go:31] will retry after 972.291µs: open /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/scheduled-stop-782737/pid: no such file or directory
I1124 14:03:34.619957    4611 retry.go:31] will retry after 3.089189ms: open /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/scheduled-stop-782737/pid: no such file or directory
I1124 14:03:34.624174    4611 retry.go:31] will retry after 2.392418ms: open /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/scheduled-stop-782737/pid: no such file or directory
I1124 14:03:34.627512    4611 retry.go:31] will retry after 7.631488ms: open /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/scheduled-stop-782737/pid: no such file or directory
I1124 14:03:34.635716    4611 retry.go:31] will retry after 6.627237ms: open /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/scheduled-stop-782737/pid: no such file or directory
I1124 14:03:34.642971    4611 retry.go:31] will retry after 8.607706ms: open /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/scheduled-stop-782737/pid: no such file or directory
I1124 14:03:34.652197    4611 retry.go:31] will retry after 17.206098ms: open /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/scheduled-stop-782737/pid: no such file or directory
I1124 14:03:34.673013    4611 retry.go:31] will retry after 29.703912ms: open /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/scheduled-stop-782737/pid: no such file or directory
I1124 14:03:34.703244    4611 retry.go:31] will retry after 50.883953ms: open /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/scheduled-stop-782737/pid: no such file or directory
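
The retry intervals above roughly double between attempts. As a minimal sketch of that backoff pattern in Go (a hypothetical helper under a simple doubling schedule, not minikube's actual retry.go):

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// retryWithBackoff calls fn until it succeeds or attempts run out,
	// roughly doubling the wait between tries, as in the log lines above.
	func retryWithBackoff(attempts int, wait time.Duration, fn func() error) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = fn(); err == nil {
				return nil
			}
			fmt.Printf("will retry after %v: %v\n", wait, err)
			time.Sleep(wait)
			wait *= 2
		}
		return err
	}

	func main() {
		// Hypothetical target; the test above polls a profile pid file.
		err := retryWithBackoff(5, 100*time.Microsecond, func() error {
			_, statErr := os.Stat("/tmp/example-pid")
			return statErr
		})
		fmt.Println("final:", err)
	}
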
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-782737 --cancel-scheduled
minikube stop output:

                                                
                                                
-- stdout --
	* All existing scheduled stops cancelled

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-782737 -n scheduled-stop-782737
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-782737
E1124 14:04:00.800393    4611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/functional-471703/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-782737 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1124 14:04:00.856810  132484 out.go:360] Setting OutFile to fd 1 ...
	I1124 14:04:00.856992  132484 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 14:04:00.857023  132484 out.go:374] Setting ErrFile to fd 2...
	I1124 14:04:00.857044  132484 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 14:04:00.857607  132484 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-2805/.minikube/bin
	I1124 14:04:00.857918  132484 out.go:368] Setting JSON to false
	I1124 14:04:00.858061  132484 mustload.go:66] Loading cluster: scheduled-stop-782737
	I1124 14:04:00.858464  132484 config.go:182] Loaded profile config "scheduled-stop-782737": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 14:04:00.858573  132484 profile.go:143] Saving config to /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/scheduled-stop-782737/config.json ...
	I1124 14:04:00.858796  132484 mustload.go:66] Loading cluster: scheduled-stop-782737
	I1124 14:04:00.858955  132484 config.go:182] Loaded profile config "scheduled-stop-782737": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-782737
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-782737: exit status 7 (65.876641ms)

                                                
                                                
-- stdout --
	scheduled-stop-782737
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-782737 -n scheduled-stop-782737
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-782737 -n scheduled-stop-782737: exit status 7 (70.716382ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-782737" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-782737
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-782737: (5.582091528s)
--- PASS: TestScheduledStopUnix (110.28s)

                                                
                                    
TestInsufficientStorage (13.47s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-059768 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-059768 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (10.864911335s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"493b4a00-602b-4a5f-96e8-4496d355b8e6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-059768] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"4d42b295-91f4-45ae-80a7-d81a242d51b8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21932"}}
	{"specversion":"1.0","id":"f73e02a2-1375-468d-a930-0d67a737da5f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"9b711f3b-1017-44dd-8ad1-1a8119ffa068","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21932-2805/kubeconfig"}}
	{"specversion":"1.0","id":"b69a1c56-b95c-4ac1-9126-e7d4741591c1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21932-2805/.minikube"}}
	{"specversion":"1.0","id":"83885f0e-4ddd-4cee-9d77-71b5cd90581e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"86da2279-c153-4834-b22b-1ca91588d17e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"65353eeb-17d3-487f-a75d-7b0be71ade3a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"720ddbd9-7a5b-4000-a75e-c1cb83662796","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"fd3842e8-f59b-45f9-aba8-4c85e4abb2f2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"e039617a-6899-4aa8-871f-4734c6de812a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"5c972e03-8f3a-4379-8bef-1f4cf125cac4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-059768\" primary control-plane node in \"insufficient-storage-059768\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"3d31baf4-7c63-44ab-8fb0-69e3caf08f50","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1763789673-21948 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"8c216ae0-0112-43cd-bfb5-2a5eaec79cdf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"eb133dee-f8e2-4667-8272-455c99a6870d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
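
Each line of the --output=json stream above is a CloudEvents 1.0 envelope whose data payload is all string fields (message, name, exitcode, and so on). A hedged Go sketch for filtering error events such as RSRC_DOCKER_STORAGE out of that stream (field names come from the output above; the program itself is illustrative):

	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)

	// event models the CloudEvents fields seen in the --output=json lines above.
	type event struct {
		Type string            `json:"type"`
		Data map[string]string `json:"data"`
	}

	func main() {
		sc := bufio.NewScanner(os.Stdin)
		sc.Buffer(make([]byte, 0, 64*1024), 1024*1024) // event lines can be long
		for sc.Scan() {
			var e event
			if json.Unmarshal(sc.Bytes(), &e) != nil {
				continue // skip anything that is not a JSON event
			}
			if e.Type == "io.k8s.sigs.minikube.error" {
				fmt.Printf("%s (exit %s): %s\n", e.Data["name"], e.Data["exitcode"], e.Data["message"])
			}
		}
	}
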
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-059768 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-059768 --output=json --layout=cluster: exit status 7 (296.701082ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-059768","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-059768","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1124 14:05:02.735878  134212 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-059768" does not appear in /home/jenkins/minikube-integration/21932-2805/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-059768 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-059768 --output=json --layout=cluster: exit status 7 (316.27715ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-059768","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-059768","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1124 14:05:03.053127  134278 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-059768" does not appear in /home/jenkins/minikube-integration/21932-2805/kubeconfig
	E1124 14:05:03.063509  134278 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/insufficient-storage-059768/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-059768" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-059768
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-059768: (1.990717884s)
--- PASS: TestInsufficientStorage (13.47s)

                                                
                                    
TestRunningBinaryUpgrade (61.22s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.2715591739 start -p running-upgrade-668851 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.2715591739 start -p running-upgrade-668851 --memory=3072 --vm-driver=docker  --container-runtime=crio: (32.844432647s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-668851 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-668851 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (17.569436607s)
helpers_test.go:175: Cleaning up "running-upgrade-668851" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-668851
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-668851: (2.176213417s)
--- PASS: TestRunningBinaryUpgrade (61.22s)

                                                
                                    
TestKubernetesUpgrade (349.18s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-610110 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1124 14:07:05.886931    4611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/addons-647907/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-610110 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (34.209460238s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-610110
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-610110: (1.33360753s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-610110 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-610110 status --format={{.Host}}: exit status 7 (71.949507ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-610110 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-610110 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m34.949408512s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-610110 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-610110 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-610110 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (129.981976ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-610110] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21932
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21932-2805/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21932-2805/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-610110
	    minikube start -p kubernetes-upgrade-610110 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-6101102 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-610110 --kubernetes-version=v1.34.1
	    

                                                
                                                
** /stderr **
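
The harness distinguishes this failure purely by exit code (106, K8S_DOWNGRADE_UNSUPPORTED). A small sketch of that kind of check with os/exec (the command line is copied from the run above; the checking code is illustrative, not the test's own helper):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-arm64", "start",
			"-p", "kubernetes-upgrade-610110", "--memory=3072",
			"--kubernetes-version=v1.28.0", "--driver=docker", "--container-runtime=crio")
		err := cmd.Run()
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			// The downgrade attempt is expected to fail with exit status 106.
			fmt.Println("exit status", ee.ExitCode())
		}
	}
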
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-610110 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1124 14:12:03.865401    4611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/functional-471703/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 14:12:05.886808    4611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/addons-647907/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-610110 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (36.198939019s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-610110" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-610110
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-610110: (2.184933835s)
--- PASS: TestKubernetesUpgrade (349.18s)

                                                
                                    
TestMissingContainerUpgrade (105.41s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.1880681502 start -p missing-upgrade-593066 --memory=3072 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.1880681502 start -p missing-upgrade-593066 --memory=3072 --driver=docker  --container-runtime=crio: (59.825704778s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-593066
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-593066
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-593066 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-593066 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (41.813688539s)
helpers_test.go:175: Cleaning up "missing-upgrade-593066" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-593066
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-593066: (1.951055241s)
--- PASS: TestMissingContainerUpgrade (105.41s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-637834 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-637834 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (96.730777ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-637834] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21932
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21932-2805/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21932-2805/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
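
The MK_USAGE rejection above is plain flag validation: --no-kubernetes and --kubernetes-version are mutually exclusive. A minimal sketch of such a guard (hypothetical flag set; minikube's real validation lives elsewhere):

	package main

	import (
		"flag"
		"fmt"
		"os"
	)

	func main() {
		noK8s := flag.Bool("no-kubernetes", false, "start without Kubernetes")
		version := flag.String("kubernetes-version", "", "Kubernetes version to deploy")
		flag.Parse()

		// Reject the conflicting combination up front, mirroring exit status 14 above.
		if *noK8s && *version != "" {
			fmt.Fprintln(os.Stderr, "cannot specify --kubernetes-version with --no-kubernetes")
			os.Exit(14)
		}
	}
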
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (49.07s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-637834 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-637834 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (48.549761291s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-637834 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (49.07s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (113.51s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-637834 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-637834 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (1m51.171571239s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-637834 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-637834 status -o json: exit status 2 (328.857934ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-637834","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-637834
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-637834: (2.007674624s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (113.51s)

                                                
                                    
TestNoKubernetes/serial/Start (7.74s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-637834 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-637834 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (7.740200869s)
--- PASS: TestNoKubernetes/serial/Start (7.74s)

                                                
                                    
TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/21932-2805/.minikube/cache/linux/arm64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-637834 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-637834 "sudo systemctl is-active --quiet service kubelet": exit status 1 (278.09185ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
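
For context, systemctl is-active exits 0 only when the unit is active; the status 3 relayed through ssh above is the conventional "inactive" result, which is exactly what this test wants to see after a cluster started without Kubernetes.
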
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (33.4s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:194: (dbg) Done: out/minikube-linux-arm64 profile list: (18.997746489s)
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
no_kubernetes_test.go:204: (dbg) Done: out/minikube-linux-arm64 profile list --output=json: (14.398803607s)
--- PASS: TestNoKubernetes/serial/ProfileList (33.40s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-637834
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-637834: (1.286642027s)
--- PASS: TestNoKubernetes/serial/Stop (1.29s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (9.02s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-637834 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-637834 --driver=docker  --container-runtime=crio: (9.015072084s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (9.02s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-637834 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-637834 "sudo systemctl is-active --quiet service kubelet": exit status 1 (279.699196ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (8.09s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (8.09s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (53.48s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.3673013936 start -p stopped-upgrade-189175 --memory=3072 --vm-driver=docker  --container-runtime=crio
E1124 14:09:00.799515    4611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/functional-471703/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.3673013936 start -p stopped-upgrade-189175 --memory=3072 --vm-driver=docker  --container-runtime=crio: (33.338758002s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.3673013936 -p stopped-upgrade-189175 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.3673013936 -p stopped-upgrade-189175 stop: (1.259345208s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-189175 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-189175 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (18.881111242s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (53.48s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.2s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-189175
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-189175: (1.199742779s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.20s)

                                                
                                    
TestPause/serial/Start (84.09s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-007087 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
E1124 14:11:48.964159    4611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/addons-647907/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-007087 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m24.093749436s)
--- PASS: TestPause/serial/Start (84.09s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (31.84s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-007087 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-007087 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (31.816483869s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (31.84s)

                                                
                                    
TestNetworkPlugins/group/false (4.93s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-626991 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-626991 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (250.264776ms)

                                                
                                                
-- stdout --
	* [false-626991] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21932
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21932-2805/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21932-2805/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1124 14:12:58.238658  172010 out.go:360] Setting OutFile to fd 1 ...
	I1124 14:12:58.239103  172010 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 14:12:58.239113  172010 out.go:374] Setting ErrFile to fd 2...
	I1124 14:12:58.239118  172010 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 14:12:58.239762  172010 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-2805/.minikube/bin
	I1124 14:12:58.240242  172010 out.go:368] Setting JSON to false
	I1124 14:12:58.241090  172010 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":6930,"bootTime":1763986649,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1124 14:12:58.241149  172010 start.go:143] virtualization:  
	I1124 14:12:58.244648  172010 out.go:179] * [false-626991] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1124 14:12:58.248134  172010 out.go:179]   - MINIKUBE_LOCATION=21932
	I1124 14:12:58.248379  172010 notify.go:221] Checking for updates...
	I1124 14:12:58.253771  172010 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 14:12:58.256690  172010 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21932-2805/kubeconfig
	I1124 14:12:58.259594  172010 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21932-2805/.minikube
	I1124 14:12:58.262537  172010 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1124 14:12:58.265485  172010 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 14:12:58.268889  172010 config.go:182] Loaded profile config "force-systemd-flag-928059": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 14:12:58.268991  172010 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 14:12:58.306617  172010 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1124 14:12:58.306797  172010 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 14:12:58.405216  172010 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-24 14:12:58.392913453 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 14:12:58.405315  172010 docker.go:319] overlay module found
	I1124 14:12:58.408314  172010 out.go:179] * Using the docker driver based on user configuration
	I1124 14:12:58.411186  172010 start.go:309] selected driver: docker
	I1124 14:12:58.411206  172010 start.go:927] validating driver "docker" against <nil>
	I1124 14:12:58.411220  172010 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 14:12:58.414692  172010 out.go:203] 
	W1124 14:12:58.417480  172010 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1124 14:12:58.420388  172010 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-626991 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-626991

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-626991

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-626991

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-626991

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-626991

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-626991

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-626991

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-626991

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-626991

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-626991

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-626991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-626991"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-626991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-626991"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-626991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-626991"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-626991

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-626991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-626991"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-626991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-626991"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-626991" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-626991" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-626991" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-626991" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-626991" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-626991" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-626991" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-626991" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-626991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-626991"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-626991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-626991"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-626991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-626991"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-626991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-626991"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-626991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-626991"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-626991" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-626991" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-626991" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-626991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-626991"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-626991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-626991"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-626991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-626991"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-626991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-626991"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-626991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-626991"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-626991

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-626991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-626991"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-626991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-626991"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-626991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-626991"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-626991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-626991"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-626991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-626991"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-626991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-626991"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-626991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-626991"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-626991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-626991"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-626991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-626991"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-626991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-626991"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-626991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-626991"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-626991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-626991"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-626991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-626991"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-626991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-626991"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-626991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-626991"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-626991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-626991"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-626991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-626991"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-626991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-626991"

                                                
                                                
----------------------- debugLogs end: false-626991 [took: 4.501191871s] --------------------------------
helpers_test.go:175: Cleaning up "false-626991" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-626991
--- PASS: TestNetworkPlugins/group/false (4.93s)

TestStartStop/group/old-k8s-version/serial/FirstStart (61.01s)
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-706771 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-706771 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (1m1.011259081s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (61.01s)

TestStartStop/group/old-k8s-version/serial/DeployApp (10.42s)
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-706771 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [1305436e-3503-4029-912d-8c8cf12da01f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [1305436e-3503-4029-912d-8c8cf12da01f] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.005043253s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-706771 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.42s)
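
Note: the deploy-and-verify sequence this test performs can be reproduced by hand. A minimal shell sketch, assuming the same profile/context name and the testdata/busybox.yaml manifest from the minikube repo; kubectl wait stands in for the harness's pod-matching poll:

    # Deploy the busybox test pod into the default namespace (same manifest the test uses).
    kubectl --context old-k8s-version-706771 create -f testdata/busybox.yaml
    # The harness polls for pods labeled integration-test=busybox; kubectl wait is an equivalent check.
    kubectl --context old-k8s-version-706771 wait --for=condition=Ready \
      pod -l integration-test=busybox --timeout=8m0s
    # Confirm exec works by reading the container's open-file limit, as the test does.
    kubectl --context old-k8s-version-706771 exec busybox -- /bin/sh -c "ulimit -n"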

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (12.14s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-706771 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-706771 --alsologtostderr -v=3: (12.139186442s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.14s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.2s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-706771 -n old-k8s-version-706771
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-706771 -n old-k8s-version-706771: exit status 7 (78.510016ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-706771 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.20s)
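
Note: the "status error: exit status 7 (may be ok)" line above reflects minikube's convention of encoding host state in the exit code: status exits non-zero (7 here) for a stopped node rather than only on hard failures. A hedged sketch of the same check, assuming that exit-code convention:

    # Query only the host state; on a stopped cluster this prints "Stopped" and exits 7.
    out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-706771 -n old-k8s-version-706771
    rc=$?
    # Exit status 7 means the host is stopped, which is fine here: the dashboard
    # addon can be enabled while the cluster is down.
    if [ "$rc" -eq 0 ] || [ "$rc" -eq 7 ]; then
      out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-706771 \
        --images=MetricsScraper=registry.k8s.io/echoserver:1.4
    fi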

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (47.88s)
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-706771 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-706771 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (47.454154833s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-706771 -n old-k8s-version-706771
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (47.88s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-54hbr" [72a01016-3826-4d99-9b43-7c88b607e64f] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003881563s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.13s)
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-54hbr" [72a01016-3826-4d99-9b43-7c88b607e64f] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004517329s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-706771 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.13s)
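
Note: the two dashboard checks above follow one pattern: wait for the addon's labeled pod to become Ready, then inspect the companion deployment. A minimal kubectl sketch, assuming the label and namespace shown in this log:

    # Wait for the dashboard pod the addon creates (label taken from the log above).
    kubectl --context old-k8s-version-706771 -n kubernetes-dashboard wait \
      --for=condition=Ready pod -l k8s-app=kubernetes-dashboard --timeout=9m0s
    # Inspect the metrics-scraper deployment, as AddonExistsAfterStop does.
    kubectl --context old-k8s-version-706771 -n kubernetes-dashboard \
      describe deploy/dashboard-metrics-scraper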

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-706771 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)
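
Note: the image audit above amounts to listing the images loaded in the node and flagging anything outside the expected Kubernetes set. A rough shell equivalent; the prefix filter below is an illustration only (the harness applies its own expected-image list in Go):

    # List every image present in the profile's container runtime.
    out/minikube-linux-arm64 -p old-k8s-version-706771 image list
    # Approximate the "non-minikube image" check with a prefix filter (illustrative;
    # kindest/* and gcr.io/k8s-minikube/busybox are the images flagged above).
    out/minikube-linux-arm64 -p old-k8s-version-706771 image list \
      | grep -vE '^registry\.k8s\.io/' || true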

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (78.3s)
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-444317 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-444317 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m18.294972728s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (78.30s)

TestStartStop/group/embed-certs/serial/FirstStart (89.32s)
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-720293 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-720293 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m29.320399715s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (89.32s)

TestStartStop/group/no-preload/serial/DeployApp (10.31s)
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-444317 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [425d89e6-e7dd-4305-a272-badc5ebf1597] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [425d89e6-e7dd-4305-a272-badc5ebf1597] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.002816169s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-444317 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.31s)

TestStartStop/group/no-preload/serial/Stop (12.03s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-444317 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-444317 --alsologtostderr -v=3: (12.034191849s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.03s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.2s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-444317 -n no-preload-444317
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-444317 -n no-preload-444317: exit status 7 (76.508519ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-444317 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/no-preload/serial/SecondStart (48.83s)
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-444317 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1124 14:19:00.799813    4611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/functional-471703/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-444317 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (48.396059367s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-444317 -n no-preload-444317
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (48.83s)

TestStartStop/group/embed-certs/serial/DeployApp (10.43s)
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-720293 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [512e7325-45b0-48e8-a89e-558464cf3040] Pending
helpers_test.go:352: "busybox" [512e7325-45b0-48e8-a89e-558464cf3040] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [512e7325-45b0-48e8-a89e-558464cf3040] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.003058297s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-720293 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.43s)

TestStartStop/group/embed-certs/serial/Stop (12s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-720293 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-720293 --alsologtostderr -v=3: (11.997919327s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.00s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.2s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-720293 -n embed-certs-720293
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-720293 -n embed-certs-720293: exit status 7 (79.64498ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-720293 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/embed-certs/serial/SecondStart (58.11s)
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-720293 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-720293 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (57.694409588s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-720293 -n embed-certs-720293
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (58.11s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-xdmjc" [0ca9e6a2-e143-4b08-bfaf-a541eb0f842b] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004332223s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.13s)
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-xdmjc" [0ca9e6a2-e143-4b08-bfaf-a541eb0f842b] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003967382s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-444317 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.13s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.3s)
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-444317 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.30s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (84.27s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-152851 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-152851 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m24.27140813s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (84.27s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-7rfrv" [54479f7d-df5f-4bdb-9bf0-fffe91f3f263] Running
E1124 14:20:32.094313    4611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/old-k8s-version-706771/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 14:20:32.100597    4611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/old-k8s-version-706771/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 14:20:32.111899    4611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/old-k8s-version-706771/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 14:20:32.133212    4611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/old-k8s-version-706771/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 14:20:32.174522    4611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/old-k8s-version-706771/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 14:20:32.255874    4611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/old-k8s-version-706771/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 14:20:32.417311    4611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/old-k8s-version-706771/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 14:20:32.738545    4611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/old-k8s-version-706771/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 14:20:33.380572    4611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/old-k8s-version-706771/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 14:20:34.662644    4611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/old-k8s-version-706771/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003003269s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.12s)
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-7rfrv" [54479f7d-df5f-4bdb-9bf0-fffe91f3f263] Running
E1124 14:20:37.224994    4611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/old-k8s-version-706771/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003395216s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-720293 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.12s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.26s)
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-720293 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/newest-cni/serial/FirstStart (35.27s)
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-948249 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1124 14:21:13.070452    4611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/old-k8s-version-706771/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-948249 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (35.269882225s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (35.27s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.45s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-152851 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [cf205d1b-7448-4c54-94b9-88644eb3827e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [cf205d1b-7448-4c54-94b9-88644eb3827e] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.00436315s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-152851 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.45s)

TestStartStop/group/newest-cni/serial/Stop (1.42s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-948249 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-948249 --alsologtostderr -v=3: (1.420891825s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.42s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-948249 -n newest-cni-948249
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-948249 -n newest-cni-948249: exit status 7 (90.546703ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-948249 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/newest-cni/serial/SecondStart (16.17s)
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-948249 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-948249 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (15.794736881s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-948249 -n newest-cni-948249
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (16.17s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.73s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-152851 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-152851 --alsologtostderr -v=3: (12.729801193s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.73s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-948249 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.28s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-152851 -n default-k8s-diff-port-152851
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-152851 -n default-k8s-diff-port-152851: exit status 7 (139.443548ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-152851 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E1124 14:21:54.032706    4611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/old-k8s-version-706771/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.28s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (59.39s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-152851 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-152851 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (59.02414655s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-152851 -n default-k8s-diff-port-152851
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (59.39s)

TestNetworkPlugins/group/auto/Start (86.55s)
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-626991 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
E1124 14:22:05.887150    4611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/addons-647907/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-626991 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m26.548982693s)
--- PASS: TestNetworkPlugins/group/auto/Start (86.55s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-n9rgw" [81506cf9-bce8-4955-8685-686c2fe938fb] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003282519s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-n9rgw" [81506cf9-bce8-4955-8685-686c2fe938fb] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003514386s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-152851 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.3s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-152851 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.30s)

TestNetworkPlugins/group/kindnet/Start (83.43s)
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-626991 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
E1124 14:23:15.953995    4611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/old-k8s-version-706771/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-626991 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m23.426147103s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (83.43s)

TestNetworkPlugins/group/auto/KubeletFlags (0.29s)
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-626991 "pgrep -a kubelet"
I1124 14:23:23.903235    4611 config.go:182] Loaded profile config "auto-626991": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.29s)

TestNetworkPlugins/group/auto/NetCatPod (11.27s)
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-626991 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-kpfmb" [5046f7cf-47b3-4983-ae4d-60014a8ca895] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1124 14:23:25.766733    4611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/no-preload-444317/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 14:23:25.778321    4611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/no-preload-444317/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 14:23:25.789932    4611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/no-preload-444317/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 14:23:25.811521    4611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/no-preload-444317/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 14:23:25.853049    4611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/no-preload-444317/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 14:23:25.935794    4611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/no-preload-444317/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 14:23:26.097203    4611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/no-preload-444317/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 14:23:26.419037    4611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/no-preload-444317/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 14:23:27.060558    4611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/no-preload-444317/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 14:23:28.342340    4611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/no-preload-444317/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-kpfmb" [5046f7cf-47b3-4983-ae4d-60014a8ca895] Running
E1124 14:23:30.904651    4611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/no-preload-444317/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.003742496s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.27s)

TestNetworkPlugins/group/auto/DNS (0.24s)
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-626991 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.24s)

TestNetworkPlugins/group/auto/Localhost (0.16s)
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-626991 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.16s)

TestNetworkPlugins/group/auto/HairPin (0.2s)
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-626991 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.20s)
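
Note: the DNS/Localhost/HairPin trio exercises three distinct paths through the CNI: cluster DNS resolution, pod-to-self over loopback, and pod-to-its-own-service (hairpin NAT). The probes, as run by the tests above:

    # DNS: resolve the kubernetes.default service from inside the netcat pod.
    kubectl --context auto-626991 exec deployment/netcat -- nslookup kubernetes.default
    # Localhost: the pod dials its own port 8080 over loopback.
    kubectl --context auto-626991 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    # HairPin: the pod dials itself through its own service name, exercising hairpin NAT.
    kubectl --context auto-626991 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"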

                                                
                                    
TestNetworkPlugins/group/calico/Start (76.36s)
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-626991 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
E1124 14:24:00.799825    4611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/functional-471703/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 14:24:06.749853    4611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/no-preload-444317/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-626991 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m16.355936084s)
--- PASS: TestNetworkPlugins/group/calico/Start (76.36s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-c2khv" [7d336397-8e36-42dd-ae59-47c11c1ca183] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004012352s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.38s)
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-626991 "pgrep -a kubelet"
I1124 14:24:44.864469    4611 config.go:182] Loaded profile config "kindnet-626991": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.38s)

TestNetworkPlugins/group/kindnet/NetCatPod (13.44s)
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-626991 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-5w92q" [de2a750d-1c7c-4bfd-af28-6d1aeaf2f20b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1124 14:24:47.711509    4611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/no-preload-444317/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-5w92q" [de2a750d-1c7c-4bfd-af28-6d1aeaf2f20b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 13.00705052s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (13.44s)
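
Note: NetCatPod force-replaces the netcat deployment and then waits for its pods to report Ready. A hand-run equivalent (a sketch: rollout status stands in for the test's own polling loop):

    kubectl --context kindnet-626991 replace --force -f testdata/netcat-deployment.yaml
    kubectl --context kindnet-626991 rollout status deployment/netcat --timeout=15m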

TestNetworkPlugins/group/kindnet/DNS (0.19s)
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-626991 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.19s)

TestNetworkPlugins/group/kindnet/Localhost (0.14s)
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-626991 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.14s)

TestNetworkPlugins/group/kindnet/HairPin (0.18s)
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-626991 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.18s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-v5thm" [edcfba56-f012-41b6-ab60-0a183f2417bb] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.003532454s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/custom-flannel/Start (67.57s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-626991 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-626991 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m7.567635137s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (67.57s)

TestNetworkPlugins/group/calico/KubeletFlags (0.36s)
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-626991 "pgrep -a kubelet"
I1124 14:25:23.360023    4611 config.go:182] Loaded profile config "calico-626991": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.36s)

TestNetworkPlugins/group/calico/NetCatPod (12.34s)
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-626991 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-cnpzs" [bc9164c6-3d87-4c14-ba9f-fadca3973c6c] Pending
helpers_test.go:352: "netcat-cd4db9dbf-cnpzs" [bc9164c6-3d87-4c14-ba9f-fadca3973c6c] Running
E1124 14:25:32.094357    4611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/old-k8s-version-706771/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.003811962s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.34s)

TestNetworkPlugins/group/calico/DNS (0.21s)
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-626991 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.21s)

TestNetworkPlugins/group/calico/Localhost (0.16s)
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-626991 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.16s)

TestNetworkPlugins/group/calico/HairPin (0.18s)
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-626991 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.18s)

TestNetworkPlugins/group/enable-default-cni/Start (74.79s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-626991 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
E1124 14:26:09.634462    4611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/no-preload-444317/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 14:26:28.673822    4611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/default-k8s-diff-port-152851/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-626991 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m14.789950109s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (74.79s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.3s)
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-626991 "pgrep -a kubelet"
E1124 14:26:28.683349    4611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/default-k8s-diff-port-152851/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 14:26:28.694718    4611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/default-k8s-diff-port-152851/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 14:26:28.716460    4611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/default-k8s-diff-port-152851/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 14:26:28.758786    4611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/default-k8s-diff-port-152851/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 14:26:28.841709    4611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/default-k8s-diff-port-152851/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
I1124 14:26:28.974514    4611 config.go:182] Loaded profile config "custom-flannel-626991": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.30s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (12.38s)
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-626991 replace --force -f testdata/netcat-deployment.yaml
E1124 14:26:29.005164    4611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/default-k8s-diff-port-152851/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 14:26:29.326539    4611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/default-k8s-diff-port-152851/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-jx86j" [4dace2cc-599d-4b00-bad7-8c047eef86b5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1124 14:26:29.967894    4611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/default-k8s-diff-port-152851/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 14:26:31.249199    4611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/default-k8s-diff-port-152851/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 14:26:33.812676    4611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/default-k8s-diff-port-152851/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-jx86j" [4dace2cc-599d-4b00-bad7-8c047eef86b5] Running
E1124 14:26:38.934740    4611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/default-k8s-diff-port-152851/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.003642609s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.38s)

TestNetworkPlugins/group/custom-flannel/DNS (0.17s)
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-626991 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-626991 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-626991 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

TestNetworkPlugins/group/flannel/Start (63.68s)
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-626991 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
E1124 14:27:05.887061    4611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/addons-647907/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 14:27:09.664911    4611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/default-k8s-diff-port-152851/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-626991 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (1m3.684066951s)
--- PASS: TestNetworkPlugins/group/flannel/Start (63.68s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.4s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-626991 "pgrep -a kubelet"
I1124 14:27:17.948827    4611 config.go:182] Loaded profile config "enable-default-cni-626991": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.40s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.37s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-626991 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-szmmw" [498f1e00-c761-4a8d-b4dd-e67f30c1ebd6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-szmmw" [498f1e00-c761-4a8d-b4dd-e67f30c1ebd6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.003567947s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.37s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-626991 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.17s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-626991 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.17s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-626991 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

TestNetworkPlugins/group/bridge/Start (80.41s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-626991 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-626991 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m20.406154816s)
--- PASS: TestNetworkPlugins/group/bridge/Start (80.41s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-gxspq" [a7e288e8-c82c-42ca-81e6-008b753e8a20] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004488866s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
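
Note: unlike kindnet and calico, whose controller pods are polled in kube-system, flannel's run in their own kube-flannel namespace. A hand-run equivalent of this wait (sketch):

    kubectl --context flannel-626991 get pods -n kube-flannel -l app=flannel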

TestNetworkPlugins/group/flannel/KubeletFlags (0.45s)
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-626991 "pgrep -a kubelet"
I1124 14:28:13.853935    4611 config.go:182] Loaded profile config "flannel-626991": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.45s)

TestNetworkPlugins/group/flannel/NetCatPod (12.35s)
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-626991 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-fz628" [e368cc84-0806-4f5a-91fc-f1415ead9b06] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-fz628" [e368cc84-0806-4f5a-91fc-f1415ead9b06] Running
E1124 14:28:24.153205    4611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/auto-626991/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 14:28:24.159551    4611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/auto-626991/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 14:28:24.171864    4611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/auto-626991/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 14:28:24.193449    4611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/auto-626991/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 14:28:24.234976    4611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/auto-626991/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 14:28:24.316368    4611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/auto-626991/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 14:28:24.477835    4611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/auto-626991/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 14:28:24.799420    4611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/auto-626991/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 14:28:25.441604    4611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/auto-626991/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 14:28:25.765229    4611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2805/.minikube/profiles/no-preload-444317/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 12.003139779s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (12.35s)

TestNetworkPlugins/group/flannel/DNS (0.18s)
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-626991 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.18s)

TestNetworkPlugins/group/flannel/Localhost (0.16s)
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-626991 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.16s)

TestNetworkPlugins/group/flannel/HairPin (0.13s)
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-626991 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.13s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.3s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-626991 "pgrep -a kubelet"
I1124 14:29:15.849230    4611 config.go:182] Loaded profile config "bridge-626991": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.30s)

TestNetworkPlugins/group/bridge/NetCatPod (10.25s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-626991 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-n68dw" [983a5bfa-a623-4730-8579-cf8a940ed99a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-n68dw" [983a5bfa-a623-4730-8579-cf8a940ed99a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.003711607s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.25s)

TestNetworkPlugins/group/bridge/DNS (0.15s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-626991 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.15s)

TestNetworkPlugins/group/bridge/Localhost (0.13s)
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-626991 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.13s)

TestNetworkPlugins/group/bridge/HairPin (0.14s)
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-626991 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.14s)
Test skip (31/328)

TestDownloadOnly/v1.28.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.28.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.34.1/cached-images (0s)
=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

TestDownloadOnly/v1.34.1/binaries (0s)
=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

TestDownloadOnly/v1.34.1/kubectl (0s)
=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

TestDownloadOnlyKic (0.45s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-367583 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:248: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-367583" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-367583
--- SKIP: TestDownloadOnlyKic (0.45s)

TestOffline (0s)
=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/serial/GCPAuth/RealCredentials (0s)
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/AmdGpuDevicePlugin (0s)
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

TestDockerFlags (0s)
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctionalNewestKubernetes (0s)
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestISOImage (0s)
=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

TestChangeNoneUser (0s)
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.22s)
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-799392" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-799392
--- SKIP: TestStartStop/group/disable-driver-mounts (0.22s)

TestNetworkPlugins/group/kubenet (4.62s)
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
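
Note: because crio requires an explicit CNI, the kubenet (no-CNI) mode is never started; a supported run would pick one of the CNIs exercised above, e.g. (a sketch trimmed from the bridge invocation earlier in this report):

    out/minikube-linux-arm64 start -p bridge-626991 --memory=3072 --cni=bridge --driver=docker --container-runtime=crio

Since no kubenet-626991 profile or kubeconfig context was ever created, every probe in the debugLogs below fails with "context was not found" or "Profile ... not found", which is expected for a skipped test.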
panic.go:615: 
----------------------- debugLogs start: kubenet-626991 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-626991
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-626991
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-626991
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-626991
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-626991
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-626991
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-626991
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-626991
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-626991
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-626991
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-626991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-626991"
>>> host: /etc/hosts:
* Profile "kubenet-626991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-626991"
>>> host: /etc/resolv.conf:
* Profile "kubenet-626991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-626991"
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-626991
>>> host: crictl pods:
* Profile "kubenet-626991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-626991"
>>> host: crictl containers:
* Profile "kubenet-626991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-626991"
>>> k8s: describe netcat deployment:
error: context "kubenet-626991" does not exist
>>> k8s: describe netcat pod(s):
error: context "kubenet-626991" does not exist
>>> k8s: netcat logs:
error: context "kubenet-626991" does not exist
>>> k8s: describe coredns deployment:
error: context "kubenet-626991" does not exist
>>> k8s: describe coredns pods:
error: context "kubenet-626991" does not exist
>>> k8s: coredns logs:
error: context "kubenet-626991" does not exist
>>> k8s: describe api server pod(s):
error: context "kubenet-626991" does not exist
>>> k8s: api server logs:
error: context "kubenet-626991" does not exist
>>> host: /etc/cni:
* Profile "kubenet-626991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-626991"
>>> host: ip a s:
* Profile "kubenet-626991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-626991"
>>> host: ip r s:
* Profile "kubenet-626991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-626991"
>>> host: iptables-save:
* Profile "kubenet-626991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-626991"
>>> host: iptables table nat:
* Profile "kubenet-626991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-626991"
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-626991" does not exist
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-626991" does not exist
>>> k8s: kube-proxy logs:
error: context "kubenet-626991" does not exist
>>> host: kubelet daemon status:
* Profile "kubenet-626991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-626991"
>>> host: kubelet daemon config:
* Profile "kubenet-626991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-626991"
>>> k8s: kubelet logs:
* Profile "kubenet-626991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-626991"
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-626991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-626991"
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-626991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-626991"
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-626991
>>> host: docker daemon status:
* Profile "kubenet-626991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-626991"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-626991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-626991"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-626991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-626991"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-626991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-626991"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-626991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-626991"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-626991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-626991"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-626991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-626991"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-626991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-626991"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-626991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-626991"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-626991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-626991"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-626991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-626991"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-626991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-626991"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-626991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-626991"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-626991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-626991"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-626991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-626991"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-626991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-626991"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-626991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-626991"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-626991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-626991"

                                                
                                                
----------------------- debugLogs end: kubenet-626991 [took: 4.418332929s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-626991" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-626991
--- SKIP: TestNetworkPlugins/group/kubenet (4.62s)
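
Every probe in the dump above fails the same way because the kubenet-626991 profile was skipped before a cluster was ever created: minikube has no such profile and kubectl has no such context. A minimal sketch of the kubeconfig-side lookup that yields the repeated error (using client-go, which is an assumption here; the debugLogs collector itself shells out to kubectl):

package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load kubeconfig via the default loading rules ($KUBECONFIG or
	// ~/.kube/config) and look up the context the probes targeted.
	cfg, err := clientcmd.NewDefaultClientConfigLoadingRules().Load()
	if err != nil {
		panic(err)
	}
	if _, ok := cfg.Contexts["kubenet-626991"]; !ok {
		// This is the state the dump reflects: the context was never created.
		fmt.Println(`context "kubenet-626991" does not exist`)
	}
}

Running out/minikube-linux-arm64 profile list, as the messages above suggest, confirms the same thing from the minikube side.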

x
+
TestNetworkPlugins/group/cilium (5.97s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-626991 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-626991

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-626991

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-626991

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-626991

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-626991

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-626991

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-626991

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-626991

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-626991

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-626991

>>> host: /etc/nsswitch.conf:
* Profile "cilium-626991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-626991"

>>> host: /etc/hosts:
* Profile "cilium-626991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-626991"

>>> host: /etc/resolv.conf:
* Profile "cilium-626991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-626991"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-626991

>>> host: crictl pods:
* Profile "cilium-626991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-626991"

>>> host: crictl containers:
* Profile "cilium-626991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-626991"

>>> k8s: describe netcat deployment:
error: context "cilium-626991" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-626991" does not exist

>>> k8s: netcat logs:
error: context "cilium-626991" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-626991" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-626991" does not exist

>>> k8s: coredns logs:
error: context "cilium-626991" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-626991" does not exist

>>> k8s: api server logs:
error: context "cilium-626991" does not exist

>>> host: /etc/cni:
* Profile "cilium-626991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-626991"

>>> host: ip a s:
* Profile "cilium-626991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-626991"

>>> host: ip r s:
* Profile "cilium-626991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-626991"

>>> host: iptables-save:
* Profile "cilium-626991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-626991"

>>> host: iptables table nat:
* Profile "cilium-626991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-626991"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-626991

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-626991

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-626991" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-626991" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-626991

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-626991

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-626991" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-626991" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-626991" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-626991" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-626991" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-626991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-626991"

>>> host: kubelet daemon config:
* Profile "cilium-626991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-626991"

>>> k8s: kubelet logs:
* Profile "cilium-626991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-626991"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-626991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-626991"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-626991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-626991"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-626991

>>> host: docker daemon status:
* Profile "cilium-626991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-626991"

>>> host: docker daemon config:
* Profile "cilium-626991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-626991"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-626991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-626991"

>>> host: docker system info:
* Profile "cilium-626991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-626991"

>>> host: cri-docker daemon status:
* Profile "cilium-626991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-626991"

>>> host: cri-docker daemon config:
* Profile "cilium-626991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-626991"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-626991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-626991"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-626991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-626991"

>>> host: cri-dockerd version:
* Profile "cilium-626991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-626991"

>>> host: containerd daemon status:
* Profile "cilium-626991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-626991"

>>> host: containerd daemon config:
* Profile "cilium-626991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-626991"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-626991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-626991"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-626991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-626991"

>>> host: containerd config dump:
* Profile "cilium-626991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-626991"

>>> host: crio daemon status:
* Profile "cilium-626991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-626991"

>>> host: crio daemon config:
* Profile "cilium-626991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-626991"

>>> host: /etc/crio:
* Profile "cilium-626991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-626991"

>>> host: crio config:
* Profile "cilium-626991" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-626991"

----------------------- debugLogs end: cilium-626991 [took: 5.731374582s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-626991" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-626991
--- SKIP: TestNetworkPlugins/group/cilium (5.97s)
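
The net_test.go:102: prefix on the skip message near the top of this block is the file:line marker Go's testing package attaches when a test calls its skip helpers. A minimal sketch of how a SKIP result like this is produced (illustrative names only, not minikube's actual test body):

package net_test

import "testing"

// TestCiliumSketch mimics how net_test.go reports its skip: t.Skip logs
// the message with the caller's file:line prefix and marks the test as
// SKIP rather than FAIL, yielding the "--- SKIP: ..." line seen above.
func TestCiliumSketch(t *testing.T) {
	t.Skip("Skipping the test as it's interfering with other tests and is outdated")
}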